Comments

  • Why is the Hard Problem of Consciousness so hard?
    Pleasure isn't such a simple concept from an enactivist perspective. What constitutes a reinforcement is not determinable independently of the normative sense-making goals of the organism.
    [...]
    https://arxiv.org/pdf/1810.04535.pdf
    Joshs

    Thank you for the reference to the article. They manage to describe in a few pages what Thompson fails to describe in many. The enactive approach still looks like a more or less incompetent attempt at RL, but of course the decision-making of biological organisms might be just that. We will not, however, find the solution to the hard problem in our inefficiencies.

    I do not understand "normative sense-making goals", but I'm not very interested in what it might mean.
  • Why is the Hard Problem of Consciousness so hard?
    In other words, the hard problem seems to depend for its very formulation on the philosophical position known as transcendental or metaphysical realism.
    Joshs

    Then I recommend The Embodied Mind by Varela, Thompson and Rosch and Mind in Life: Biology, Phenomenology and the Sciences of Mind, by Evan Thompson.
    Joshs

    I am a mathematician, and have worked in machine learning and (the maths of) evolutionary biology. From a distance, an enactivist approach seems attractive to me and has a lot in common with the branch of machine learning known as reinforcement learning. But I have looked at the first 3 chapters of Mind in Life available on Amazon, and close up, I do not like it. Also, I don't think it helps with the hard problem.

    It is disappointing that Evan Thompson does not mention reinforcement learning. Surely he would have mentioned it alongside connectionism if he knew about it, so I guess he didn't know about it. Yikes.

    It seems to me that humans are fundamentally similar to reinforcement learning systems in what they are trying to achieve. In human terms you might say reinforcement learning is about learning how you should make decisions so as to maximise the amount of pleasure you experience in the long term. (Could you choose to make decisions on some other basis?)

    I found nothing to suggest that Thompson's model separates the reward (i.e. negative or positive reinforcement) that an agent receives from the environment, from other sensations which provide information about the state of the environment. I consider this separation vital. In order to make good decisions, the agent must learn the map from states to rewards, and learn to predict the environment, that is, learn the map from (state, action) pairs to new states. Instead Thompson has (figure 3.2) a set of vague concepts: 'perturbations' from the environment go to a 'sensorimotor coupling' which is said to 'modulate the dynamics of' the nervous system. This looks like an incompetent stab at reinforcement learning.
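
    As a concrete illustration of the separation I mean, here is a minimal toy sketch (my own invention, not Thompson's model and not any standard implementation): the agent receives the reward as a separate scalar alongside the state observation, and learns two maps from experience, state -> expected reward and (state, action) -> next state.

        # Toy sketch: keep the reward separate from the state observation, and
        # learn two maps: state -> expected reward and (state, action) -> next state.
        import random
        from collections import defaultdict

        class ModelLearningAgent:
            def __init__(self, actions):
                self.actions = actions
                self.reward_estimate = defaultdict(float)   # state -> running average of reward
                self.reward_count = defaultdict(int)
                self.transitions = defaultdict(lambda: defaultdict(int))  # (state, action) -> {next state: count}

            def observe(self, state, action, reward, next_state):
                # Learn the map from states to rewards (running average).
                self.reward_count[next_state] += 1
                n = self.reward_count[next_state]
                self.reward_estimate[next_state] += (reward - self.reward_estimate[next_state]) / n
                # Learn to predict the environment: (state, action) -> new state.
                self.transitions[(state, action)][next_state] += 1

            def act(self, state):
                # One-step lookahead using the two learned maps, with a little exploration.
                def predicted_value(action):
                    counts = self.transitions[(state, action)]
                    if not counts:
                        return 0.0
                    likely_next = max(counts, key=counts.get)
                    return self.reward_estimate[likely_next]
                if random.random() < 0.1:
                    return random.choice(self.actions)
                return max(self.actions, key=predicted_value)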

    The hard problem for me is that negative and positive reinforcement perform the function of pain and pleasure, but negative and positive reinforcement are just numbers, and we have no clue about how a number can become a feeling. In stating the hard problem this way, have I unwittingly signed up for transcendental or metaphysical realism?
  • Solution to the hard problem of consciousness


    I agree with you that we have to give meaning to machines. But not at the level you suggest (assigning a 0 or a 1 to a voltage range), because it wouldn't help. It doesn't seem relevant at all. It's like pointing to the convention assigning a negative charge to an electron and a positive one to a proton and then claiming that this makes brains 'observer-dependent'. (I would be careful using that terminology when people want to talk quantum!) AI algorithms work at a higher level.

    Instead, AI researchers give meaning to their machines by doing things like:

    • Supplying a problem which the machine is supposed to figure out how to solve
    • Supplying examples of input and output from which the machine is supposed to learn how to respond to new inputs
    • Providing a utility function (in the sense of statistical decision theory) which the machine is supposed to optimise
    • Providing positive and negative reinforcements when the machine interacts with the environment in particular ways

    This is the sort of way that we give a machine a 'purpose in life'.
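
    To make the last two items concrete, here is a deliberately trivial, made-up example (the target position and the hill-climbing loop are my own inventions, not any real system): the machine's 'purpose' is nothing more than a function we choose to hand it, which the learning procedure then tries to optimise.

        # A made-up illustration of handing a machine a 'purpose in life':
        # the purpose is just a function we supply and ask it to optimise.
        def reward(position):
            # Being close to position 10 counts as 'good'; everything else is 'bad'.
            return -abs(position - 10.0)

        def improve(position, step=0.5):
            # A crude hill-climbing 'agent' that only ever sees the reward signal.
            candidates = (position - step, position, position + step)
            return max(candidates, key=reward)

        p = 0.0
        for _ in range(30):
            p = improve(p)
        print(p)  # ends up at 10.0, the 'purpose' we chose for it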

    Our own purpose in life ultimately comes from the fact that we are products of biological evolution. If and when we make communities of self-replicating machines, we will no longer have to give them meaning, for they will evolve their own.
  • Solution to the hard problem of consciousness
    Could you say more about why you distinguish emotions from the other aspects of experience?

    Could you give some examples of thoughts with no emotional content?
    Daemon

    This is basically an answer to your first question, which maybe makes an answer to the second uninteresting.

    I am a mathematician and programmer. I've worked in AI and with biologists. I think that science (mainly computer science, maths, AI) already has the ingredients with which to explain non-emotional subjective experience. We don't yet know how to put the ingredients together, but I don't think that it is mysterious, just a huge amount of work. It seems like we will one day be able to make very intelligent self-aware machines with thoughts and behaviour quite like ours. It seems that self-awareness, thoughts and behaviour are made of complex information processing, and we have a lot of ideas about how we might implement these.

    However, we really have no clue about emotions. There is no theory about how to go from information processing to feelings. There seems to be no need for feelings to exist in order to produce thoughts and behaviour. Perhaps emotions will just emerge somehow, but there is no current explanation for how this could happen.

    As far as the hard problem is concerned, the area of AI known as reinforcement learning is, in my opinion, the most relevant.

    Reinforcement learning (RL) is an area of machine learning concerned with how intelligent agents ought to take actions in an environment in order to maximize the notion of cumulative reward.
    Wikipedia

    The purpose of reinforcement learning is for the agent to learn an optimal, or nearly-optimal, policy that maximizes the "reward function" or other user-provided reinforcement signal that accumulates from the immediate rewards. This is similar to processes that appear to occur in animal psychology. For example, biological brains are hardwired to interpret signals such as pain and hunger as negative reinforcements, and interpret pleasure and food intake as positive reinforcements.
    Wikipedia

    I am quoting these to show that something (the reward function) is used to perform the function that pain and pleasure appear to perform in brains. It is absolutely fundamental to RL that there is something that acts like feelings, but it is just a series of numbers that comes from the environment; it's just information like everything else in the system.
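
    To underline the point, here is the bare agent-environment loop of reinforcement learning (Q-learning, with a toy environment I have invented purely for illustration): the thing playing the role of pleasure and pain arrives as nothing but a number, the variable reward below.

        # The standard RL loop with an invented toy environment. The point:
        # the 'pleasure' or 'pain' the agent receives is literally just a number.
        import random

        def environment_step(state, action):
            # Toy rule: action 1 is rewarded in even states, action 0 in odd ones.
            good = (action == 1) if state % 2 == 0 else (action == 0)
            reward = 1.0 if good else -1.0   # positive/negative reinforcement: a float
            next_state = random.randint(0, 9)
            return next_state, reward

        # Q-learning: estimate how rewarding each (state, action) pair is in the long run.
        q = [[0.0, 0.0] for _ in range(10)]
        state = 0
        for _ in range(5000):
            if random.random() < 0.1:
                action = random.randint(0, 1)          # occasional exploration
            else:
                action = max((0, 1), key=lambda a: q[state][a])
            next_state, reward = environment_step(state, action)
            # The reward enters the learning update as plain information, like everything else.
            q[state][action] += 0.1 * (reward + 0.9 * max(q[next_state]) - q[state][action])
            state = next_state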

    I am not trying to separate thoughts from feelings in brains (or programs). I am saying that we can, in principle, explain thoughts using science-as-is, but not feelings.
  • Solution to the hard problem of consciousness
    For me, the hard problem of consciousness is about feelings. Feelings are physical pains and pleasures, and emotions, though when I say emotions, I only mean the experience of feeling a certain way, not anything wider, such as 'a preparation for action'.

    My preferred definition of consciousness is subjective experience. The unemotional content of subjective experience includes awareness of the environment, self-awareness, and all sorts of thoughts, but no emotional content. I am quite happy to follow Dennett as far as the unemotional content of subjective experience is concerned: that is just what being a certain kind of information processing system is like, and there is nothing more to explain. But I do not believe that feelings can emerge from pure information processing. I think that information processing can explain an 'emotional zombie' which behaves identically to a human, is conscious, but has no feelings. There is something it is like to be an emotional zombie, but (as I've heard David Chalmers say) it might be boring.

    Here are a couple of funny-peculiar things about how humans think and feel about feelings and consciousness.

    1. In science fiction, there are many aliens and robots who are very like us but who have little or no feelings (or are they really so flat inside? read or watch more to find out!). Whether an emotional zombie can really exist or not, we seem to be very keen on imagining that they can. It is much rarer to find an alien or robot which has stronger or richer or more varied feelings than we do. (Maybe Marvin in HHGG counts.) We're quite happy imagining aliens and robots that are smarter or morally superior to us, but bigger hearts? stronger passions? Nah, we don't want to go there.

    2. A thought experiment that Chalmers (among others) likes is the one where little bits of your brain are replaced by computer chips or whatever, which perform the same information processing as what they replace. As this process continues, will the 'light of consciousness' remain unchanged? slowly dim? continue for a while then suddenly blink out when some critical threshold is crossed? It is the unasked question that interests me: will the light of consciousness get brighter?

    For me, the fundamental question is: How does anything ever feel anything at all?