Comments

  • Blindsight's implications in consciousness?
    Cognitive theorists conclude from clinical examples of blindsight that consciousness is only a part of what goes on in the brain, and that consciousness is not needed for behavior.
    Joshs

    Are you agreeing or disagreeing with them?
  • Blindsight's implications in consciousness?
    Why do you think someone else can then?
  • Blindsight's implications in consciousness?
    Can you perceive visually through your entire body?
  • Blindsight's implications in consciousness?
    I'm a bit puzzled how intentionality fits in here, how it could be an illusion. Can you give a practical example?
  • Consciousness, Evolution and the Brain's Activity
    Are our brains evolutionarily hardwired to actually "take notice" of what enters our sphere of consciousness? Or does it process information as if consciousness weren't at all present?
    tom111

    The brain doesn't work by processing information. It works through electro-chemical processes. When you've described those processes, you've said it all. There isn't anything for "information" to do. "Information" is a way of talking about the processes; it can be used to quantify them, but it doesn't play a part in them.

    Do you want to rephrase your question?
  • Solution to the hard problem of consciousness
    We don't know exactly how consciousness arises. There's no point in making stuff up.
  • Solution to the hard problem of consciousness
    Sorry, I think it's a lot of scientific-sounding words stuck together without any rationale or practical basis.
  • Solution to the hard problem of consciousness
    I agree with you that we have to give meaning to machines. But not at the level you suggest (assigning 0 or a 1 to a voltage range), because it wouldn't help.
    GrahamJ

    But the observer dependency applies at all levels of computation.

    You have the physical machine: the steel, copper, silicon, the electric currents. All of those are observer-independent, in the sense that they are what they are whatever anybody says or thinks about them.

    That the machine is carrying out computation is observer-dependent. We ascribe meanings to the mechanisms, but the machine doesn't thereby take on any meaning.

    AI researchers give meaning to their machines...
    GrahamJ

    Nothing about the physical machine changes when we ascribe meaning to the mechanisms.

    We ascribe meaning to the voltage ranges in the logic gates, and at the other end we ascribe meaning to the output, which is what you are doing now as you ascribe meaning to the pixels appearing on your screen.

    The voltages and the pixels don't have any meaning for the machine itself.
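
    As a rough illustration (a hypothetical Python sketch, not anything from the thread): one and the same bit pattern can be read as a number, a letter or a pixel brightness, and each reading is a meaning we ascribe from outside, not something the voltages carry.

        # A minimal sketch: one byte, three different "meanings",
        # all of them supplied by us, none of them by the machine.
        byte = 0b01000001           # a pattern of "high" and "low" voltages, nothing more

        as_integer = byte           # 65, if we deem it a number
        as_character = chr(byte)    # "A", if we deem it ASCII text
        as_pixel = byte / 255       # roughly 0.25 grey, if we deem it a brightness

        print(as_integer, as_character, round(as_pixel, 2))

    The machine handles the same voltages in every case; the different readings exist only for us.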
  • Solution to the hard problem of consciousness
    Try putting the essence of it in a few sentences.
  • Solution to the hard problem of consciousness
    Read at least the OP of my thread Matter and Qualitative Perception
    Enrique

    I read some of it when it first appeared, and I've looked at it again now; it's incoherent.
  • Solution to the hard problem of consciousness
    It seems that self-awareness, thoughts and behaviour are made of complex information processing
    GrahamJ

    Hi GrahamJ.

    That does seem to be a widely held belief, but I think it is a misunderstanding; in fact I feel I know it is a mistake, with almost a mathematical certainty. So I find it interesting.

    Here's one way of showing what I mean by "almost a mathematical certainty".

    Input Voltages for Logic Gates

    Logic gate circuits are designed to input and output only two types of signals: “high” (1) and “low” (0), as represented by a variable voltage: full power supply voltage for a “high” state and zero voltage for a “low” state. In a perfect world, all logic circuit signals would exist at these extreme voltage limits, and never deviate from them (i.e., less than full voltage for a “high,” or more than zero voltage for a “low”).

    However, in reality, logic signal voltage levels rarely attain these perfect limits due to stray voltage drops in the transistor circuitry, and so we must understand the signal level limitations of gate circuits as they try to interpret signal voltages lying somewhere between full supply voltage and zero.

    Voltage Tolerance of TTL Gate Inputs

    TTL gates operate on a nominal power supply voltage of 5 volts, +/- 0.25 volts. Ideally, a TTL “high” signal would be 5.00 volts exactly, and a TTL “low” signal 0.00 volts exactly.

    However, real TTL gate circuits cannot output such perfect voltage levels, and are designed to accept “high” and “low” signals deviating substantially from these ideal values.

    “Acceptable” input signal voltages range from 0 volts to 0.8 volts for a “low” logic state, and 2 volts to 5 volts for a “high” logic state.

    “Acceptable” output signal voltages (voltage levels guaranteed by the gate manufacturer over a specified range of load conditions) range from 0 volts to 0.5 volts for a “low” logic state, and 2.7 volts to 5 volts for a “high” logic state:
    All About Circuits Textbook

    So right at the very start of the design and construction of the machine, we determine arbitrarily what shall count as a 1 and what shall count as a 0. All sorts of things are going on in those physical logic gates: they emit heat, magnetic fields, perhaps vibrations, and they sit in electrical circuits that pass through, for example, the computer fan and even the local power station. Yet we deem an arbitrary range of voltages in arbitrary locations to be the relevant processes.
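
    To make the arbitrariness concrete, here is a minimal sketch (hypothetical Python, using only the TTL input thresholds quoted above). The physical quantity is just a voltage; calling it a "1" or a "0" is a convention we write down, not something the gate knows about.

        # Hypothetical sketch: the TTL input convention as a rule we impose.
        def interpret_ttl_input(voltage):
            if 0.0 <= voltage <= 0.8:
                return "0"            # we deem this range "low"
            if 2.0 <= voltage <= 5.0:
                return "1"            # we deem this range "high"
            return "undefined"        # the physics carries on regardless of our labels

        for volts in (0.3, 1.4, 3.7):
            print(volts, "volts ->", interpret_ttl_input(volts))

    A different designer could draw the ranges differently and the electrons would not notice; that is the sense in which the "information" is observer-dependent.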

    Another way of looking at this is to say that the "information processing" in a digital computer is "observer-dependent": it's only doing computation because we say it is. Other examples of observer-dependent phenomena include money and marriage. Metals, mice and mountains are examples of observer-independent phenomena.

    The mind, thought, feelings and emotions are observer-independent phenomena. You are thinking, you have a mind, you have feelings and emotions, regardless of what any outside observer says about it.

    So what is going on in a digital computer is very different to what goes on in the brain/mind. Our computers are astoundingly clever inventions, but they have nothing to do with our conscious experience.

    The brain isn't a digital mechanism. As well as the activity at synapses, the brain is bathed in slower-acting chemicals whose effects are felt over periods of hours or days. There are also synchronised waves of activity travelling around the brain.

    Finally, it makes no real sense to think about the brain in isolation from the whole body and the sensory input it provides, or from the world beyond the body, to which we are connected through our senses in a way the computer is not.
  • Solution to the hard problem of consciousness
    For me, the hard problem of consciousness is about feelings. Feelings are physical pains and pleasures, and emotions, though when I say emotions, I only mean the experience of feeling a certain way, not anything wider, such as 'a preparation for action'.

    My preferred definition of consciousness is subjective experience. The unemotional content of subjective experience includes awareness of the environment and the self-awareness, all sorts of thoughts, but no emotional content. I am quite happy to follow Dennett as far as the unemotional content of subjective experience is concerned: that is just what being a certain kind of information processing system is like, and there is nothing more to explain.
    GrahamJ

    Could you say more about why you distinguish emotions from the other aspects of experience?

    Could you give some examples of thoughts with no emotional content?
  • Solution to the hard problem of consciousness
    The tree still exists. But not in colors and shape and felt structure. Like the sound of thunder is still there if no one hears it.
    Cartuna

    Isn't sound equivalent to colour here? Sound involves your eardrums and your brain.
  • The important question of what understanding is.
    The two (mind you, not one) experiencing entities are a result of corpus callosotomy
    InPitzotl

    I don't understand why you are telling me that, as if it was a point against me.

    This person did not have a corpus callosotomy; she had a stroke (see article). It is very obvious that she has AHS. That's also curious... why is it obvious? What behaviors is she exhibiting that suggest AHS?
    InPitzotl

    I don't understand why you are asking me that.
  • The important question of what understanding is.
    So if experience is what makes us an entity, how could that possibly happen?
    InPitzotl

    Two experiencing entities.

    But I don't really think the effects of cutting the corpus callosum are as straightforward as they are sometimes portrayed; for example, the person had a previous life with connected hemispheres.

    But also there's a more fundamental point that I don't believe you have addressed, which is that a robot or a computer is only an "entity" or a "system" because we choose to regard it that way.
    Daemon

    I don't think the "corpus callosum/AHS" argument addresses this.
  • The important question of what understanding is.
    You're correct to point out that I ignored your discussion of AHS etc.; apologies. I ignored it because it doesn't stand up, but I should have explained that.

    Cutting the corpus callosum creates two entities.
  • The important question of what understanding is.
    This feature makes agents tolerant against the unpredicted. The thermostat is missing this.
    InPitzotl

    I've looked at this many times, and thought about it, but I just can't see why you think it is significant. Why do you think being tolerant against the unpredicted makes something an agent, or means that the robot is trying?

    But also there's a more fundamental point that I don't believe you have addressed, which is that a robot or a computer is only an "entity" or a "system" because we choose to regard it that way. The computer or robot is not intrinsically an entity, in the way you are.

    I was thinking about this just now when I saw this story: "In Idyllwild, California, a dog ran for mayor and won and is now called Mayor Max II".

    The dog didn't run for mayor. The town is "unincorporated" and doesn't have its own local government (and therefore doesn't have a mayor). People are just pretending that the dog ran for mayor.

    In a town which was incorporated and did have a mayor, a dog would not be permitted to run for office.
  • What is Nirvana
    Buddhists generally would not accept that.
    Wayfarer

    What I've noticed is that if you ask two Buddhists any challenging question about their religion, you get four or more contradictory answers.
  • Interpreting what others say - does it require common sense?
    I used to work as an accident investigator, and it was striking how often people talked about common sense (and the lack of it) after accidents. I began to think that maybe common sense isn't so common. Also, how do you tell whether somebody has common sense? I concluded that it isn't a useful concept!
  • Interpreting what others say - does it require common sense?
    I know that interpretation of what other people say is context- and situation-dependent. But do you still need some common sense in order to correctly interpret what others say or write?
    Cidat

    I've been talking about this in the discussion on "understanding". I believe what you need to correctly interpret what others say is not so much common sense as common experience.
  • The important question of what understanding is.
    Thanks for the thoughtful and thought-provoking response InPitzotl.

    We might say you're an individual because you are "of one mind".
    InPitzotl

    That's broadly what I meant when I said that it's experience that makes you an individual, but you seem to think we disagree.

    For me "having a mind" and "having experiences" are roughly the same thing. So I could say having a mind makes you an individual, or an entity, or a person. Think about the world before life developed: there were no entities or individuals then.

    The robot is trying to buy bananas. But this is metaphorical language. — Daemon

    Not in the robot case. This is no mere metaphor; it is literally the case that the robot is trying to buy bananas.
    InPitzotl

    Suppose instead of buying bananas we asked the robot to control the temperature of your central heating: would you say the thermostat is only metaphorically trying to control the temperature, but the robot is literally trying? Could you say why, or why not?

    For me, "literally trying" is something only an entity with a mind can do. They must have some goal "in mind".

    It's as if meaning isn't about meaning any more; it's about meaning with experiences. Meaning without experiences cannot be meaning, even though it's exactly like meaning with experiences save for the definitive having the experiences part.
    InPitzotl

    The Oxford English Dictionary's first definition of "mind" is: "The element of a person that enables them to be aware of the world and their experiences".

    The word "meaning" comes from the same Indo-European root as the word "mind". Meaning takes place in minds.

    The robot has no mind in which meaning could take place. Any meaning in the scenario with the bananas is in our minds. The robot piggybacks on our understanding, which is gained from experience. If nobody had experienced bananas and shops and all the rest, we couldn't program the robot to go and buy them.
  • The important question of what understanding is.
    I'm not sure what "want" means to the precision you're asking.
    InPitzotl

    Because you're an experiencing being you know what "want" means. Like the cat you know what it feels like to want food, or attention.

    This is the Philosophy of Language forum. My contention is that a computer or a robot cannot understand language, because understanding requires experience, which computers and robots lack.

    Whether a robot can "try" to do something is not a question for Philosophy of Language. But I will deal with it in a general way, which covers both agency and language.

    In ordinary everyday talk we all anthropomorphise. The thermostat is trying to maintain a temperature of 20 degrees. The hypothalamus tries to maintain a body temperature around 37 degrees. The modem is trying to connect to the internet. The robot is trying to buy bananas. But this is metaphorical language.

    The thermostat and the robot don't try to do things in the way we do. They are not even individual entities in the way we are. It's experience that makes you an individual.
  • How can one remember things?
    Google "Jennifer Aniston cells".

    And here's a related discussion:

    https://jeffbowers.blogs.bristol.ac.uk/blog/grandmother-cells/
  • The important question of what understanding is.
    The cat wants something. The robot is not capable of wanting. The heat seeking missile is not capable of wanting.
  • The important question of what understanding is.
    Agentive action is better thought of IMO as goal directed than merely as "thought". In a typical case an agent's goal, or intention, is a world state that the agent strives to attain.
    InPitzotl

    You're wrong because the robot doesn't have a goal. We have a goal, for the robot.
  • The important question of what understanding is.
    But where do the goals come from, if not from "mere thought"? — Daemon

    In terms of explaining agentive acts, I don't think we care. I don't have to answer the question of what my cat is thinking when he's following me around the house. It suffices that his movements home in on where I'm going. That is agentive action. Now, I don't think all directed actions are agentive... a heat seeking missile isn't really trying to attain a goal in an agentive way.
    InPitzotl

    But a robot buying bananas is?
  • The important question of what understanding is.
    Agentive action is better thought of IMO as goal directed than merely as "thought".
    InPitzotl

    But where do the goals come from, if not from "mere thought"?
  • The important question of what understanding is.
    What you really mean isn't that the robot didn't interact with bananas, but that it "didn't count".
    InPitzotl

    What I really mean is that it didn't interact with anything in the way we interact with things. It doesn't see, feel, hear, smell or taste bananas, and that is how we interact with bananas.
  • The important question of what understanding is.
    It might be better to take a clearer case, as your drinking the coffee is agentive, which muddies the water a little. So we'll ask "why is it agentive when you go to the store for bananas, but not agentive when you exert a tiny gravitational pull on Jupiter?"

    Agency is the capacity of an actor to act. Agency is contrasted to objects reacting to natural forces involving only unthinking deterministic processes.
  • The important question of what understanding is.
    That isn't a difficult question. Only conscious entities can be agentive, but not everything conscious entities do is agentive.
  • The important question of what understanding is.
    Your CAT tool doesn't interact with bananas.
    InPitzotl

    But neither does a robot.

    What is this "do" you're talking about? I program computers, I go to the store and buy bananas, I generate a particular body temperature, I radiate in the infrared, I tug the planet Jupiter using a tiny force, I shake when I have too much coffee, I shake when I dance... are you talking about all of this, or just some of it?InPitzotl

    I'm talking about acting as an agent. That's something computers and robots can't do, because they aren't agents. We don't treat them as agents. When your robot runs amok in the supermarket and tears somebody's head off, it won't be the robot that goes to jail. If some code you write causes damage, it won't be any good saying "it wasn't me, it was this computer I programmed". I think you know this, really.

    They aren't agents because they aren't conscious, in other words they don't have experience.
  • The important question of what understanding is.
    Both Siri and the kind stranger (seem to have) understood my question. A mini Turing Test.
    TheMadFool

    Yes. The Stanford Encyclopedia says that the Turing Test was initially suggested as a means to determine whether a machine can think. But we know how Siri works, and we know that it's not thinking in the way we think.

    When we give directions to a hotel we use a mental map based on our experiences.

    Siri uses a map based on our experiences. Not Siri's experiences. Siri doesn't have experiences. You know that, right?

    Me (to a stranger): Sir, can you give me the directions to the nearest hotel?

    Stranger (to me): Yeah, sure. Take this road and turn left at the second junction. There's a hotel there, a good one.
    TheMadFool

    Because the stranger understands the question in a way Siri could not, he is able to infer that you have requirements which your words haven't expressed. You aren't just looking for the nearest hotel, he thinks you will also want a good one. And he knows (or thinks he knows) what a good one is. Because of his experience of the world.

    That's what it's like when we think. We understand what "good" means, because we have experienced pleasure and pain, frustration and satisfaction.
  • The important question of what understanding is.
    You seem to be contradicting yourself. The other day you had a robot understanding things; now you say a computer doesn't know what a banana is.

    I've been saying from the start that computers don't do things (like calculate, translate), we use them to do those things.
  • The important question of what understanding is.
    I thought some examples of Gricean Implicature might amusingly illustrate what computers can't understand (and why):

    A: I broke a finger yesterday.
    Implicature: The finger was A's finger.

    A: Smith doesn’t seem to have a girlfriend these days.
    B: He has been paying a lot of visits to New York lately.
    Implicature: He may have a girlfriend in New York.

    A: I am out of petrol.
    B: There is a garage around the corner.
    Implicature: You could get petrol there.

    A: Are you coming out for a beer tonight?
    B: My in-laws are coming over for dinner.
    Implicature: B can't go out for a beer tonight.

    You can complete the remaining examples (a computer can't).

    A: Where is the roast beef?
    B: The dog looks happy.

    A: Has Sam arrived yet?
    B: I see a blue car has just pulled up.

    A: Did the Ethiopians win any gold medals?
    B: They won some silver medals.

    A: Are you having some cake?
    B: I'm on a diet.
  • The Essence Of Wittgenstein
    I haven't really familiarized myself with Wittgenstein in detail but from what I've gathered, I basically agree with everything he says about language.
    Hermeticus

    That's interesting, because from what I've gathered, there's a great deal of disagreement about what he means, among those who have really familiarised themselves with what he said.
  • The important question of what understanding is.
    As I tried to explain with Mary's room thought experiment, redness is just 750 nm (wavelength of red) in eye dialect. Just as you can't claim that you've learned anything new when the statement the burden of proof is translated in latin as onus probandi, you can't say that seeing red gives you any new information.
    TheMadFool

    What you've set out here is just one side of the disagreement about Mary's Room, but I am suggesting that not just red but everything you have learned comes from experience. Do you have a counter to that?
  • The important question of what understanding is.
    Mary's deficit in the room is only that she hasn't seen red. Apart from that, she is a normal experiencing human being.

    A computer doesn't experience anything. All the information you and I have ever acquired has come from experience.