Comments

  • The Problem of Resemblences
    Space in and of itself is very important to agents... agents need to manipulate their environment, and space is essentially where the environment is. Many (all?) of our senses are "affixed" into our sense of space somehow; those that don't "locate" still "feel" local (e.g., smell)... we have a sense of our position in space, and it is "on" this position that our sense of touch/texture normally resides (though we can extend this... with ye ol' trick of using a pencil to feel something beyond our fingertips, we can project tactile sensations off of our skin... fun experiment).

    It would seem to me that the resemblance feature of sight being discussed here is really a feature of how sight allocates things into space. Sight feels almost magical in this sense... with sight, our sensory experiences extend far beyond our bodies. Sight is not quite unique in this regard... we can hear things and allocate them into space, so hearing "reaches out" far beyond our bodies just as sight does. But for humans in particular, sight is unique in its precision of allocation of sensed objects into space... we can ascertain an object's shape, motion, and behavior in real time as we sense it. I gather this is the particular sense in which a particular flower should "look like" what it is... we can feel the flower's shape, using our proprioception and tactile senses to allocate the flower into space, and when we look at the flower its shape should be allocated into space the same way we pieced together that it should be using these other senses.

    So is it just sight's ability to affix objects remotely and precisely in space that's being discussed here? Analogous to how a flower should look like what it is, shouldn't it also "echo-locate" like what it is to entities that use high-precision echolocation?
  • Mary vs physicalism
    A physical description of photons would suffice if physical stuff actually exists. If idealism is true, however, describing photons as physical things that exist independent of mind(s) would be false. A physical description of photons can only work if A) idealism isn't the case and B) they're not conscious.RogueAI
    So if a mind can give a complete description of a photon, then the photon is independent of the mind. But if the mind cannot give a complete description of a photon, then a photon must be dependent on a mind. Something like this?
  • Mary vs physicalism
    Are photons conscious?Marchesk
    I'll say no. Will you answer my question now?
  • Mary vs physicalism
    I'm saying that a purely physical description of pain is incomplete.RogueAI
    Is a purely physical description of a photon complete?
  • Mary vs physicalism
    It's just not part of the function of sight.frank
    Let me phrase it this way. Imagine we make a robot driver that will stop at a red light; we need not add experience to the robot. By comparison, I'm a human, and being a good driver, if I see a red light, I'll stop at the light.

    Assuming we buy this, it's clear that experience is not necessary for sight, if by sight we mean to include what the robot is doing. But what's not so clear is that when I stop at a red light, I'm not stopping because I experienced red; that is, that were it not for that experience, I would not have stopped.
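
    To make the robot driver concrete, here's a toy sketch (everything in it, names and thresholds included, is made up for illustration): a bare function from camera frames to actions, with nothing anywhere in it answering to "experience".

    def mean_rgb(frame):
        """Average the (r, g, b) channels over a list of (r, g, b) pixels."""
        n = len(frame)
        return tuple(sum(px[i] for px in frame) / n for i in range(3))

    def drive(frame):
        """Stop when the view is dominated by red; otherwise keep going."""
        r, g, b = mean_rgb(frame)
        return "stop" if r > 2 * max(g, b) else "go"

    print(drive([(220, 30, 25)] * 100))  # stop (a lit red signal)
    print(drive([(40, 180, 60)] * 100))  # go (an ordinary green scene)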
    That is, can we pinpoint a difference in the structure or functioning of the brain of a person who knows how to ride a bike from the brain of a person who doesn't?Srap Tasmaner
    I get that... but Mary's Room doesn't really address this very point. We could say that physicalism predicts there would be a physical difference in the brain. But it's a physical difference resulting from a physically different scenario... so physicalism remains viable even if "knowing-that" mechanisms are insufficient to establish arbitrary states of the brain that "actually going" establishes.

    MR simply posits that know-thats are all of physical knowledge, then notices that being-able isn't a know-that, and concludes that know-thats cannot be of the type "all physical knowledge". But that's not convincing... the above presents precisely the scenario that preserves physicalism's viability while betraying this argument. Under this scenario, the MR argument fails at the premise that all physical knowledge is acquired by know-thats, because "actually going"s can be physically manifest, and know-thats need not be physically exhaustive. If that makes sense.
  • Mary vs physicalism
    It would be difficult to make the case that functional sight entails the experience of sight.frank
    But it would be trivial, and tautological in a meaningless sense, to say that functional sight excludes the experience of sight. Words are boxes, and boxes are flexible. All you have to do is erase any consequence of experiencing from your box of "sight". We could build functional mimics... robots with cameras... and have them perform tasks that require sight but not experience. We can draw our "sight" box this way; it's what that robot would do. Since we can do this, and since boxes are arbitrary, I can easily upgrade your "difficult" to "impossible".

    But by contrast, there's the part of my post you didn't comment on... you're describing on this forum something you call "experience of sight". In describing it, you're typing a word: "E" followed by "X", followed by "P", followed by another "E", and so on. If the thing you're talking about exists, given you're talking about it, then it must have the functional consequence that in talking about it you typed "E" followed by "X" followed by "P" followed by another "E", and so on.

    So it would be trivial to say that functional sight excludes what you're referring to by the phrase "experience of sight". But what would be difficult is to claim that you're talking about something real when you say "experience of sight", but that thing has no functional consequence.
  • Mary vs physicalism
    Remember, you don't need the experience of sight to have the functioning elements of sight.frank
    This would imply that the experience of sight is a non-functioning element of sight. But surely the experience of sight is at a minimum functionally necessary to describe the experience of sight; otherwise, how are we having this conversation? If epiphenomenally we are experiencing things, and it just so happens that physically our fingers are pressing keys in such a way as to say we're experiencing things, that would be quite a weird coincidence.
  • Mary vs physicalism
    Color is the object of her new knowledge.frank
    Mary cannot tell she's seeing red without first learning that what she is seeing is red.
  • Mary vs physicalism
    It's knowledge of color.frank
    Not until it gets associated with color.
  • Mary vs physicalism
    Her new knowledge isn't about brains or eyes.frank
    Insofar as it's new knowledge, it's necessarily knowledge about particular kinds of mental states. The question is: why can't those be brain states? Being a brain state does not entail being about brains, or about anything in particular for that matter.
  • Mary vs physicalism
    I'm not following you, sorry.frank
    Try this... Mary is not really learning anything about "red" (the Jane/Joe/LED thing); she is learning something about her experiencing.

    Now let's wear the physicalist hat and explain this. When Mary experiences the LED, Mary gets into a specific set of physical states.

    So this, in my estimation, is what you're presuming to refute with the Mary's room argument, and that is where I think the problem comes in. MR doesn't genuinely refute this. Instead, all it winds up doing is confusing Mary knowing about states with Mary being in states.
  • Mary vs physicalism
    She has knowledge of something that isn't physical.frank
    ...as opposed to knowledge of something physical. If it's physical, it would likely be a set of states Mary has.
    If knowledge is JTB or some other internalist interpretation, then it looks like we'd have to say she learned about something non-physical.frank
    ...or some set of physical states of Mary.
    You have to understand the argument before you try to refute it. You're doing neither.frank
    You're defaulting on the question before you. You've said twice that this should be something non-physical. How do you rule out that this is physical?

    Presumably the argument rules this out. But the argument basically compares a Mary that has never been exposed to a particular kind of stimulus, to a Mary that has been. But if Mary is learning something abstract about states of Mary that are only induced when exposed to that particular kind of stimulus, it could easily be physical. So how do you rule this out?

    It's a question that very directly follows from the claim you made twice already.
  • Mary vs physicalism
    Right. That's all you need.frank
    Non-physical means not physical; it does not mean novel. It appears you're using "novel" to establish that this is not physical. That does not seem sufficient to establish that very thing.

    ETA: There's a state Mary gets into when being exposed to a 750nm LED. Suppose that state cannot be induced by telling Mary all about 750nm LEDs. Show that this state must therefore be non-physical.
  • Mary vs physicalism
    We would assume she already had the ability to see red, there was just none in the environment.frank
    With a little more precision, let's indeed assume Mary has the ability to see red. By that I mean that if Mary sees a 750nm LED glowing, then Mary has "experience x". Suppose Mary can also see green: if Mary sees a 550nm LED glowing, then Mary has "experience q".

    Mary has a peer, Jane, with an inverted spectrum wrt her. If Jane sees a 750nm LED, Jane has experience q; if Jane sees a 550nm LED, Jane has experience x. There's another peer, Joe, who is a protanope. If Joe sees a 750nm LED, Joe has experience y. If Joe sees a 550nm LED, Joe has experience y (we'll just be fuzzy enough to say these are the same).

    So now Mary walks out of the room and sees a 750nm LED. To be very precise here, Mary does not know what type of LED this is.

    Now let's add in the other thing that we're already presuming:
    It should be a no-brainer that she learned something new.frank
    Mary learned something new. Okay, but what? Mary can't use what she learned to imply anything other than that she had a novel experience.
    If knowledge is JTB or some other internalist interpretation, then it looks like we'd have to say she learned about something non-physical.frank
    What forces us to say she learned about something non-physical? If we're physicalists, Mary is physical. Mary learned something about something physical. Mary didn't even learn anything about red... not yet.
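
    Incidentally, the whole setup fits in a little lookup table (a toy sketch; the experience labels are just the ones from the story above), and it makes the point visible: the bare experience "x" doesn't by itself pin down a wavelength, let alone "red".

    EXPERIENCE = {
        ("Mary", 750): "x", ("Mary", 550): "q",
        ("Jane", 750): "q", ("Jane", 550): "x",  # inverted spectrum
        ("Joe", 750): "y", ("Joe", 550): "y",    # protanope: same either way
    }

    # Which (person, wavelength) pairs are consistent with experience "x"?
    print([pair for pair, e in EXPERIENCE.items() if e == "x"])
    # [('Mary', 750), ('Jane', 550)]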
  • Mary vs physicalism
    This is where you slip up I'm afraid.TheMadFool
    Nonsense.
    This is exactly what's up for debate.TheMadFool
    I want to pause here and take note of something very specific. The claim under scrutiny is whether physicalism is challenged by this or not.
    Is experiencing red completely physical or not? That, my friend, is the question.TheMadFool
    Not in my mind it isn't. This is about whether Mary's room challenges physicalism; not whether physicalism is true or not.
    You can't assume what needs to be proven unless you want to run around in circles.TheMadFool
    Ah, but you can do exactly that... if your goal is to answer the question of whether Mary's room challenges physicalism. If a presumption of physicalism is not challenged by Mary's room, then Mary's room does not challenge physicalism.

    Even so, I quite honestly do not see it as controversial that it's physically different for Mary to look at something red versus, say, reading about it. In fact, the whole Mary's room scenario is explicitly set up around Mary not being physically exposed to red until after she has learned about vision. Are you claiming it's actually controversial that there's a physical difference here?
  • Mary vs physicalism
    It takes time to understand these things.TheMadFool
    It's kind of presumptuous to diagnose disagreements. You should just state your business, not theorize what you think is wrong with me such that I dare disagree with you.
    In the bodily and mental activity of seeing red, is the mind not involved?TheMadFool
    The mind is involved when you ride a ship to the moon. Surely Neil had quite an astounding experience. There's an argument to be had that Neil's experience of going to the moon is still physical, and knowing everything physical about Neil's experience is either not equivalent to going to the moon, or requires going to the moon.

    Surely you can picture the dramatic difference between sitting in your chair and pondering a 480,000-mile trip, and actually going 480,000 miles. But that's a physical difference, right?

    Mary's not all that different. Knowing everything physical about seeing red can be interpreted in the same two ways: either it requires that Mary actually experience seeing red, which is physical (it's different for Mary to read about seeing red in a book and for Mary to see red monochromatic light, but the difference is physical), or it does not require it, in which case it can be novel for Mary to see red versus knowing how to see it in exactly the same way it could be novel for Mary to go to the moon versus knowing everything about going to the moon.
  • Mary vs physicalism
    khaled's objection isn't valid because the thought experiment specifically mentions Mary knows everything physical.TheMadFool
    You're confused. khaled's objection is valid because the thought experiment specifically mentions Mary knows everything physical. If I know everything about how Neil Armstrong landed on the moon, would that mean I'd need a space suit? Or would we have proven something non-physical since actually being on the moon leads to my suffocating, but presuming I know everything about landing on the moon doesn't require me to suffocate? Both of these are kind of ridiculous.

    So why should Mary likewise knowing everything physical about seeing red be expected to "be on the moon"... to actually be seeing red? When Mary sees red, she "goes to the moon". Mary knowing about red is simply "knowing how Neil Armstrong landed on the moon"; it doesn't require a space suit.

    Unless, of course, by "knowing everything physical" about going to the moon, you mean Mary actually goes to the moon... in which case, she saw red.
  • What Mary Didn't Know & Perception As Language
    But at minimum, Mary learns what it's like for her to experience red.hypericin
    Sure, probably. But another possibility would be that Mary doesn't so much "learn" what it's like for her to see red, as she "develops a way for her to see red" and learns what that developed way is... the difference being there's no "what it's like for Mary to see red" until she develops the ability to do so.

    ...just mentioning that to cover bases.
  • What Mary Didn't Know & Perception As Language
    Mary learns what it is like to experience red.hypericin
    Is it not that simple?hypericin
    Not really. Let's define 750nm monochromatic light as red (monochromatic is key in the definition; what we really mean is that only 750nm light is there in the visible spectrum; it's okay if 540 AM is broadcasting in your area).

    Let's furthermore suppose that Jane has an inverted spectrum wrt you, and Joe is a protanope. Then by definition, you, Jane, and Joe all see red when you look at 750nm light. But your experience is quite different from Jane's and Joe's; and Joe cannot see that big of a difference between 750nm and 550nm. So what exactly is "what it is like to experience red"? We might talk about what it's like for you when you see red, what it's like for Jane when Jane sees red, and what it's like for Joe when Joe sees red, but it's not so clear there is a generic "what it's like to see red". We could hypothesize that there probably is, but only fuzzily, and we can't actually use that unless we have some way to test it. The best we can say is that Jane and you distinguish colors the same way, and agree on which things are red enough to apply the label to. And whether or not Joe "sees red" depends on what you mean; he certainly sees a 750nm diode emitting light.
  • What Mary Didn't Know & Perception As Language
    White is a mixture of, at the very least, red, blue, and green. Each has a specific wavelength.TheMadFool
    How is that minimal? You can make white by mixing two wavelengths; you're using three, a whole extra wavelength beyond the requirement! Also, didn't you just say red was 750nm light? When you mix 750nm light with something, you still have 750nm light. Is such white red then?

    Mary has to have a word with you. Your definition of red is wrong. If you program a robot to be sensitive to 750nm light, and have it use that to show you what is and isn't red, it will give you wrong answers constantly.
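
    To make the complaint concrete, here's that definition written as a little program (the spectra are toy values I made up), along with two of the wrong answers it gives:

    def naive_is_red(spectrum):
        """The proposed definition as code: red iff 750nm light is present.
        A spectrum here is a dict of wavelength (nm) -> relative power."""
        return spectrum.get(750, 0) > 0

    monochromatic_red = {750: 1.0}
    white_mixture = {450: 1.0, 550: 1.0, 750: 1.0}  # broadband, includes 750nm
    deep_red = {700: 1.0}                           # looks red; no 750nm in it

    print(naive_is_red(monochromatic_red))  # True  -- fine
    print(naive_is_red(white_mixture))      # True  -- wrong: that light is white
    print(naive_is_red(deep_red))           # False -- wrong: 700nm light looks red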
  • What Mary Didn't Know & Perception As Language
    Red is light with a wavelength of 750 nm.TheMadFool
    Fill in the blank. White is light with a wavelength of ___ nm.
  • The important question of what understanding is.
    Why?TheMadFool
    It's not really the same thing, in short. Language does more than what perception does, and perception does more than what language does. They deserve different concepts. I don't think I want to elaborate here; I haven't bothered with the other thread yet (and once I do, I might just lurk, as I typically do way more often than comment).
  • The important question of what understanding is.
    Before we go any further, what do you think of the idea that perception is a language?TheMadFool
    It might work as a metaphor, but I wouldn't go further than that.
  • The important question of what understanding is.
    Joe's knowledge that red is 750 nm,TheMadFool
    There's language translation, and there's wrong. What color are a polar bear, Santa's beard, and snow?
    If I say out loud to you "seven" and then follow that up by writing "7" and showing it to you, is there any difference insofar as the content of my spoken and written message is concerned?TheMadFool
    Your thought experiment is misguided. 7 is a number. Seven is another name for the number 7. But 7 aka seven is not a dwarf. There might be seven dwarves, but seven isn't a dwarf.
    Likewise, seeing the actual color red is equivalent to knowing the number 750 (nm) - they're both the same thing and nothing new is learned by looking at a red object.TheMadFool
    Seeing the actual color red is not equivalent to knowing the number 750 (nm). Colors are not wavelengths of light; wavelengths of light have color (if you isolate light to photons of that wavelength and have enough of them to trigger color vision), but a wavelength of light and a color aren't the same thing. A polar bear is white, not red (except after a nice meal), despite his fur reflecting photons whose wavelength is 750nm. There's no such thing as a white photon. White is a color. Colors are not wavelengths of light.

    Joe also sees a color, in a color space we don't tend to name (because we're cruel?), when he sees 750nm light. But the color he sees is pretty much the same color as 550nm light. We call the former red, and the latter green.
  • The important question of what understanding is.
    I recall mentioning this before but what is red? Isn't it just our eyes way of perceiving 750 nm of the visible spectrum of light?TheMadFool
    Eyes do not perceive, so the answer to the question is no (I'm sure you didn't literally mean that eyes perceive, but you have to be specific enough here for me to know what you did mean).

    Color vision in most humans is trichromatic; to such humans, 750nm light affects the visual system in a particular way that contrasts quite a bit with 550nm light. The tristimulus values for each would be X=0.735, Y=0.265, Z=0 and X=0.302, Y=0.692, Z=0.008 respectively. A protanope, by contrast, is dichromatic; the protanope's visual system might have distimulus values of X=1.000, Y=0.000 for 750nm light and X=0.992, Y=0.008 for 550nm light.

    Assuming Jack is typical, Jane has an inverted spectrum, and Joe is a protanope, Jack and Jane agree 750nm light is red and 550nm light is green; and Joe doesn't quite get what the fuss is about.
    A computer doesn't experience anything. All the information you and I have ever acquired has come from experience.Daemon
    Imagine a test. There are various swatches, each within 0.1 units of X=0.735, Y=0.265, Z=0; and these are mixed in with various swatches, each within 0.1 units of X=0.302, Y=0.692, Z=0.008. Jack, Jane, Joe, and a robot fitted with a colorimeter are tasked with sorting the swatches of the former kind and the swatches of the latter kind into separate piles. Jack, Jane, and the robot would be able to pass this test. Joe will have some difficulty.

    Jack and Jane do this task well using their experiences of seeing the swatches. Joe will have great difficulty with this task despite experiencing the swatches. The robot can be programmed to succeed at this test with success rates rivaling Jack and Jane, despite having no experiences.

    I'll grant that all of the information Jack, Jane, and Joe have ever acquired has come from experience. I'll grant that the robot here does not experience. But granting this, with regard to this test, Joe's the odd one out, not the robot.

    Maybe the fact that Jack, Jane, and Joe can only sort swatches using their experiences does not demonstrate that experience is the critical thing necessary to sort swatches correctly.
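
    For what it's worth, here's the robot's half of the test as a minimal sketch, assuming the colorimeter reports (X, Y, Z) coordinates like the ones above and reading "within 0.1 units" as Euclidean distance (the swatch values are made up for illustration). Nothing answering to experience appears anywhere in it.

    import math

    RED_REF = (0.735, 0.265, 0.0)      # ~750nm light
    GREEN_REF = (0.302, 0.692, 0.008)  # ~550nm light

    def dist(a, b):
        return math.sqrt(sum((p - q) ** 2 for p, q in zip(a, b)))

    def sort_swatches(swatches):
        """Put each swatch into the pile whose reference it sits closest to."""
        piles = {"red-like": [], "green-like": []}
        for s in swatches:
            key = "red-like" if dist(s, RED_REF) < dist(s, GREEN_REF) else "green-like"
            piles[key].append(s)
        return piles

    swatches = [(0.71, 0.28, 0.01), (0.31, 0.68, 0.01),
                (0.74, 0.26, 0.0), (0.28, 0.70, 0.02)]
    print(sort_swatches(swatches))
    # {'red-like': [(0.71, 0.28, 0.01), (0.74, 0.26, 0.0)],
    #  'green-like': [(0.31, 0.68, 0.01), (0.28, 0.70, 0.02)]}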
  • The important question of what understanding is.
    A robot is not an individual, an entity, an agent, a person.Daemon
    Just a quick reminder... we're not talking about robots in general. We're talking about a robot that can manage to go to the store and get me some bananas.

    I don't believe such a robot can possibly pull this off (with any sort of efficacy) without being an individual, an entity, or an agent.

    But, sure... it need not be a person.

    I suspect that your concept of individuality/agency drags in baggage I don't myself drag in.
    Of course in everyday conversation we talk as though computers and robots were entities, but here we need to be more careful.Daemon
    Okay, so let's be careful.
    To say that a robot is shopping is a category error.Daemon
    You could say that the robot is simulating shopping.Daemon
    Imagine this theory. Shopping can only be done by girls. I say, that's utter nonsense. Shopping does not require being a girl; I'm a guy, and I can certainly pull it off. But the objection is raised that it's a category error to claim that a guy can shop; you could say that I am simulating shopping.

    I don't quite buy that said argument counts as being careful. I'm certainly, in this particular hypothetical scenario, not committing a category error simply by claiming that I, a guy, can shop; it's not me that's claiming only girls shop. In fact, the suggestion that an event that actually occurs should be considered a simulation raises red flags for me.

    This sounds like exactly the opposite of being careful.

    ETA: You've managed to formulate a theory that makes unjustified distinctions. There's now real shopping, where real bananas get put into real carts, money from real accounts changes hands, and the real bananas are brought to me; and then there's simulated shopping, where all of that stuff also happens, but we're missing vital ingredient X. There's by this logic real walking, where one manages to perform a particular choreography of controlled falling in such a way as to invoke motion without falling over, and simulated walking, where all of this stuff happens, but you're not doing it with the right stuff. There's real surgery, where a surgeon might slice me open with a knife, remove a tumor, and sew me up while managing not to kill me; and simulated surgery, where all of this stuff happens... the tumor's still removed, I'm still alive... but the thing slicing me open didn't quite have feels in the right way.

    It seems to me there's no relevant difference here between the real thing and what you're calling a simulation... which is also the real thing, but is missing the ingredient you demand the real thing requires to call it real. All of this stuff still gets done... so to me, this is the ultimate test demonstrating that the thing you demand must be there to do it isn't in fact necessary at all. Are you sure you want this to be your standard that vital ingredient X is necessary? Because it sounds to me like this is the very definition of ingredient X not being vital.

    A genuine argument for ingredient X's vitality should not look like a No True Scotsman fallacy. If experience isn't doing any work for you to explain something crucial about understanding, it's, as I said, superfluous... and your inclusion of it just to include it is simply baggage. If you have a good reason to suspect experience is necessary, that is what you should present; not just a narrative that lets you say that, but an explanation for how it critically fits in.
    Do you think the robot understands what it is doing?Daemon
    In a nutshell, yes. But again, to be clear, this does not stem from a principle that doing things is understanding. Rather, it's because this is precisely the type of task that requires understanding to do with any efficacy.
  • The important question of what understanding is.
    To understand shopping, you would need to have experienced shopping.Daemon
    Pain is a feeling. Shopping is an act.

    If I see a person walking through the store, looking at various items, picking up some of them and putting them into the cart, the person is shopping. If I see a robot walking through the store, looking at various items, picking up some of them and putting them into the cart, the robot is shopping. It's hard to say what a robot feeling pain would be by comparison; but since that's all shopping is, that robot is shopping.

    Also, are you implying nobody knows what my question means unless they have bought me bananas? (Prior to which, they have not experienced buying me bananas?)
  • The important question of what understanding is.
    I am not saying that experience is the explanation for understanding, I am saying that it is necessary for understanding.

    To understand what "pain" means, for example, you need to have experienced pain.
    Daemon
    Your example isn't even an example of what you are claiming, unless you seriously expect me to believe that you believe persons with congenital analgesia cannot understand going to the store and getting bananas.

    There's a gigantic difference between claiming that X is necessary for understanding, and claiming that X is necessary to understand X.

    ETA: Your claim is that experience is necessary for understanding. I interpret this claim as equivalent to saying that there can be no understanding without experience. The expected justification for this claim would be to show how understanding at all necessarily involves experience (because if it doesn't, the claim is wrong). This is quite different from pointing out areas of understanding that require experience (such as your pain example).

    Explain to me, for example, how you connect the requirement of experience to the example question requesting some bananas.
  • The important question of what understanding is.
    When you experience through your senses you see, feel and hear.Daemon
    And yet, Josh (guessing) does not understand Sanskrit, and you do not understand understanding. A person who does not understand something does not understand it. I shouldn't need to be telling you this.

    You've convinced yourself that experience is the explanation for understanding. The problem is, experience does not explain understanding. A large number of animals also experience; but somehow, only humans have mastered human language. Experience cannot possibly be the explanation for understanding if it isn't even an explanation of understanding.
  • The important question of what understanding is.
    You can redefine "understanding" in such a way that it is something a robot or a computer can do, but the "understanding" I am talking about is still there.Daemon
    The concept of understanding you talked about on this thread doesn't even apply to humans. If "the reason" the robot doesn't understand is because the robot doesn't experience, then the non-English speaker that looked at me funny understood the question. Certainly that's broken.

    I think you've got this backwards. You're the one trying to redefine understanding such that "the robot or a computer" cannot do it. Somewhere along the way, your concept of understanding broke to the point that it cannot even assign lack of understanding to the person who looked at me funny.
  • The important question of what understanding is.
    We're not trying to explain how you get bananas, we're trying to explain understanding.Daemon
    "Can you go to the store and pick me up some bananas?"InPitzotl
    My suggestion is that understanding something means relating it correctly to the world, which you know and can know only through experience.

    I don't mean that you need to have experienced a particular thing before you can understand it, but you do need to know how it fits in to the world which you have experienced.
    Daemon
    A correct understanding of the question consists in relating it to a request for bananas. How this fits into the world is how one goes about going to the store, purchasing bananas, coming to me, and delivering bananas. You've added experiencing in there. You seem too busy trying to compare CAT tools not understanding and an English speaker understanding to relate understanding to the real test of it: the difference between a non-English speaker just looking at me funny and an English speaker bringing me bananas.

    So what you've tried to get me to do is accept that a robot, just like a CAT tool, doesn't understand, even if the robot brings me bananas; and that the reason the robot does not understand the question is that the robot does not experience, just like the CAT tool. My counter is that the robot, just like the English speaker, is bringing me bananas, which is exactly what I meant by the question; the CAT tool is just acting like the non-English speaker, who does not understand the question (despite experiencing; surely the non-English speaker has experienced bananas, and even experiences the question... what's missing then?). "Bringing me bananas" is both a metaphor for what the English speaker correctly relates the question to that the non-English speaker doesn't, and how the English speaker demonstrates understanding the question.
  • The important question of what understanding is.
    As I emphasised in the OP, experience is the crucial element the computer lacks, that's the reason it can't understand.Daemon
    Nonsense. There are people who have this "crucial element", and yet, have no clue what that question means. If experience is "the crucial" element, what is it those people lack?

    I don't necessarily know if a given person would understand that question, but there's a test. If the person responds to that question by going to the store and bringing me some bananas, that's evidence the person has understood the question.
    The same applies to robots.Daemon
    But in order to understand your question, the person must have experienced stores, picking things up, bananas and a multitude of other things.Daemon
    Your CAT tool would be incapable of bringing me bananas if we just affixed wheels and a camera to it. By contrast, a robot might pull it off. The robot would have to do more than just translate words and look up definitions like your CAT tool does to pull it off... getting the bananas is a little bit more involved than translating questions into Dutch.
    Neither my CAT tool nor a robot do what I do, which is to understand through experience.Daemon
    Neither your CAT tool nor a person who doesn't understand the question can do what a robot who brings me bananas and a person who brings me bananas do, which is to bring me bananas.

    I'm not arguing that robots experience things here. I'm arguing that it's a superfluous requirement. But even if you do add this superfluous requirement, it's certainly not the critical element. To explain what brings me the bananas when I ask for them, you have to explain how those words make something bring me bananas. You can glue experience to the problem if you want to, but experience doesn't bring me the bananas that I asked for.
  • The important question of what understanding is.
    I'm detecting a few problems here.
    A robot does not "encounter" things any more than a PC does. ...Daemon
    The question isn't about experiencing; it's about understanding. If I ask a person, "Can you go to the store and pick me up some bananas?", I am not, by asking the question, asking the person to experience anything. I am not asking them to be consciously aware of a car, to have percepts of bananas, to feel the edges of their wallet when they fish for it, etc. I am asking for certain implied things... it's a request, it's deniable, they should purchase the bananas, and they should actually deliver them to me. That they experience things is nice and all, but all I'm asking for is some bananas.
    When we encounter something, we experience it, we see it, feel it, hear it.Daemon
    I disagree with the premise, "'When humans do X, it involves Y' implies X involves Y". What you're asking me to believe is, in my mind, the equivalent of claiming that asking "Can you go to the store and pick me up some bananas?" is asking someone to experience something; or, phrased slightly more precisely, that my expectation that they understand this equates to an expectation that they (consciously?) experience things. And I don't think that's true. I think I'm just asking for some bananas.

    The other problem is that you missed the point altogether to excuse a false analogy. A human doesn't learn language by translating words to words, or by hearing dictionary definitions of words. It's kind of impossible for a human to come to the point of being able to understand "Can you go to the store and pick me up some bananas?" by doing what your CAT tool does. It's a prerequisite for said humans to interact with the world to understand what I'm asking by that question.

    IOW, your point was that robots aren't doing something that humans do, but that's kind of backwards from the point being made that you're replying to. It's not required here that robots are doing what humans do to call this significant; it suffices to say that humans can't understand without doing something that robots do that your CAT tool doesn't do.
  • The important question of what understanding is.
    There's no significant difference.Daemon
    There absolutely is a significant difference. How are you going to teach anything, artificial or biological, what a banana is if all you give it are squiggly scratches on paper? It doesn't matter how many times your CAT tool translates "banana", it will never encounter a banana. The robot at least could encounter a banana.

    Equating these two simply because they're programmed is ignoring this giant and very significant difference.
  • The important question of what understanding is.
    With a robot, we know that what looks like understanding isn't really understanding, because we programmed the bloody robot. My translation toolDaemon
    I'm a bit confused here. Is your translation tool a robot?
  • The important question of what understanding is.
    But it does at least exist as an entity, which is a prerequisite for understanding.Daemon
    Why is it a prerequisite?
  • True or False logic.
    Person A may say "Trump is an asshole" but can not say "Trump is a nice guy" at the same time because that would be a contradiction within the framework.Hermeticus
    That's not a great example. Let's say person A does indeed say:
    "Trump is an asshole, and is a nice guy."

    Does that mean Person A said something contradictory? Nope, and that's SolarWind's point. Ah, but he did make a contradiction, in some framework. Okay, but is Person A using that framework? All you can say, essentially, is that Person A contradicted himself if he was in a framework where that's a contradiction.
  • True or False logic.
    1. True
    2. True
    3. False
    TheMadFool
    Corrected.
  • True or False logic.
    or neither true or false at the same time?TiredThinker
    If x is a cat, it can't be not a cat. [law of noncontradiction, law of the excluded middle, XOR].
    For any proposition p,
    Either p is true OR p is false [principle of bivalence]
    TheMadFool
    1. Item number 2 is true
    2. The number of true statements in this list is not 2.
    3. Puppies are evil co-conspirators with aliens from Halley's comet secretly scheming to steal your precious bodily fluids.

    3 cannot be false; because if it is, 2 can be neither true nor false, and 1 can be neither true nor false. That violates the principle of bivalence. Therefore, beware the puppies.
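
    If you'd rather not trust my squinting, here's a brute-force check (a sketch): over all eight truth assignments, exactly one is consistent, and in it statement 3 comes out true.

    from itertools import product

    # An assignment (v1, v2, v3) is consistent when statement 1's value
    # matches "item 2 is true" and statement 2's value matches "the number
    # of true statements in this list is not 2". Statement 3 is about the
    # world (puppies), so its value is left unconstrained here.
    for v1, v2, v3 in product([True, False], repeat=3):
        count = sum([v1, v2, v3])
        if v1 == v2 and v2 == (count != 2):
            print(v1, v2, v3)  # the only line printed: True True True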
  • Fitch's paradox of Knowability

    Understood, but the best I can do is link to context. @TheMadFool mentioned it, but there was something silly there with Fitch only applying to true propositions (follow it to see why I'm saying it's silly).

    The furthest I can take you is there; I cannot go inside TMF!