Comments

  • The role of the book in learning ...and in general
    Books are not always convenient; electronic devices are. — Vera Mont
    Once you've downloaded something, it's available all the time. — Vera Mont

    Not true for me. I made the mistake of buying (or rather, licensing) some maths/science books for Kindle.

    A couple of months ago I found I was unable to open the Kindle books on my computer. I could open them on my phone, but many had mathematical equations or diagrams that were too small to decipher on the phone. I spent a while trying to restart and reinstall things, then half an hour talking to customer support. This resulted in "As we have discussed, i have successfully created a ticket for the books not opening on PC, ...". Over the next ten days or so they gradually started 'working'.

    The maths display is still terrible, but decipherable. When things like this occur in the text, I have to change the font size to huge to make them clear, then back again to read normally.

    The problem of displaying maths on a computer screen was solved by the 1990s. I know that the authors of the books have beautifully typeset copies of their books as PDFs. In one case I have the PDF and can compare directly.
  • The Meta-management Theory of Consciousness
    I'm going to respond to the Medium article, not the OP.

    I can see you've put a lot of effort into this. Congratulations on writing out your stance in coherent language, which is something I'm still working on for my own stance.

    I'm a mathematician and programmer. I have worked in AI and in mathematical biology. I have been interested in computational neuroscience since the 1980s. David Marr is regarded as the godfather of computational neuroscience. I expect you know this quote about the three levels at which any machine carrying out an information-processing task must be understood, but I think it's worth repeating.
    • Computational theory: What is the goal of the computation, why is it appropriate, and what is the logic of the strategy by which it can be carried out?
    • Representation and algorithm: How can this computational theory be implemented? In particular, what is the representation for the input and output, and what is the algorithm for the transformation?
    • Hardware implementation: How can the representation and algorithm be realized physically? [Marr (1982), p. 25]

    When you talk about a design stance, it seems to me that you are interested (mainly) in the computational theory level. That's fine, so am I. When we have an experience, the questions I most want answered are "What is being computed, how is it being computed, and what purpose does the computation serve?". Some people are interested in finding the neural correlates of consciousness. I'm interested in finding the computational correlates of consciousness. This applies to machines as well as living organisms. So far, I think we're in agreement.

    BUT

    I am not impressed by auto-meta-management theory. Maybe I'm too jaded. I have seen dozens of diagrams with boxes and arrows purporting to be designs for intelligence and/or consciousness. Big words in little boxes.

    All the following quotes are from the Medium article.

    There’s also a good reason why deliberation isn’t something we use much in ML today. It’s hard to control. Deliberation may occur with minimal to no feedback from the physical body or environment.

    Today, AI is stupidly dominated by ML. And ML is stupidly dominated by NNs. This is just fashion, and it will pass. There's loads of work on searching and planning, for example, and allocating computational resources efficiently is always an important aspect of such algorithms.

    The tree search algorithm in AlphaZero is 'nothing but' an algorithm for the allocation of resources to nodes in the search tree. This example is interesting from another point of view. At a node deep in the tree, AlphaZero uses a slimmed-down version of itself, that is, one with fewer resources. You could say it uses a model of itself for planning. It may be modelling itself modelling itself modelling itself modelling itself modelling itself modelling itself. Meta-management and self-modelling are not in themselves an explanation for very much.
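    Just to make 'a slimmed-down version of itself' concrete, here is a toy budget-limited search (a plain negamax sketch of mine, nothing like AlphaZero's actual MCTS; the game is deliberately trivial), in which deep positions are evaluated by calling a smaller-budget copy of the same search:

    from dataclasses import dataclass

    @dataclass(frozen=True)
    class Nim:
        """Toy game: remove 1 or 2 stones; whoever takes the last stone wins."""
        stones: int
        def moves(self):
            return [m for m in (1, 2) if m <= self.stones]
        def apply(self, m):
            return Nim(self.stones - m)

    def search(state, budget):
        moves = state.moves()
        if not moves:
            return -1.0                 # the player to move has lost
        if budget <= len(moves):
            return 0.0                  # out of resources: a know-nothing guess
        # Each child is evaluated by a slimmed-down copy of the search itself.
        return max(-search(state.apply(m), budget // len(moves)) for m in moves)

    print(search(Nim(5), budget=1000))  # 1.0: five stones is a win for the mover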

    The model-free strategy efficiently produces habitual (or automatized) behavior for oft-repeated situations. Internally, the brain learns something akin to a direct mapping from state to action: when in a particular state, just do this particular action. The model-based strategy works in reverse, by starting with a desired end-state and working out what action to take to get there.

    That's not how reinforcement learning is usually done: the agent has a value function to guide behaviour while it is still learning, rather than working backwards from a desired end-state.
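    As a minimal illustration (a bog-standard tabular Q-learning sketch; the states, actions and rewards here are placeholders, nothing from the article):

    import random
    from collections import defaultdict

    # Minimal tabular Q-learning. The learned value estimates Q guide
    # behaviour from the very first step, while learning is still going on.
    Q = defaultdict(float)
    alpha, gamma, epsilon = 0.1, 0.99, 0.1   # learning rate, discount, exploration

    def act(state, actions):
        if random.random() < epsilon:
            return random.choice(actions)                 # occasionally explore
        return max(actions, key=lambda a: Q[(state, a)])  # otherwise follow current values

    def update(state, action, reward, next_state, actions):
        best_next = max(Q[(next_state, a)] for a in actions)
        Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])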

    Meta-management as a term isn’t used commonly. I take that as evidence that this approach to understanding consciousness has not received the attention it deserves.

    I think you're not looking in the right places. Read more GOFAI! (Terrible acronym by the way. Some of it's good and old. Some of it's bad and old. Some of it's good and new. Some of it's bad and new.)

    It’s now generally accepted that the brain employs something akin to the Actor/Critic reinforcement learning approach used in ML (Bennet, 2023).

    'Generally accepted'? Citation needed!

    The content of consciousness — whatever we happen to be consciously aware of — is a direct result of the state that is captured by the meta-management feedback loop and made available as sensory input.

    I don't think you've established the existence of a self or a subject that is capable of being aware of anything. You're assuming that it already exists, and is already capable of having experiences (of perceiving apples, etc.). Then you're arguing that it can then have more complicated thoughts (about itself, etc.). I do not find this satisfactory.

    What might be missing between this description and true human consciousness? I can think of nothing ...
    I'll bundle this with
    Our emotional affect additionally adds information, painting our particular current emotional hue over the latent state inference that is made from the raw sensory data.

    The screamingly obvious thing is feelings. (You're not alone in downplaying the importance of feelings.)

    Feelings are not paint on top of the important stuff. They are the important stuff. In my opinion any theory of consciousness must incorporate feelings at a very fundamental level. In reinforcement learning there is a reward function, and a value function. Why it is I could not tell you, but it seems that our own reward functions and value functions (I think we have multiple ones) are intimately connected with what we subjectively experience as feelings. To go back to Marr, "What is the goal of the computation?" That is where you start, with goals, purposes, rewards. The rest is just engineering...

    The other thing that I think is missing is a convincing model of selfhood (as mentioned above). I think Anil Seth does a much better job of this in his book Being You. He's wrong about some things too...
  • Proofreading Philosophy Papers
    It sounds like you want a reviewer, not a proofreader. There are several YouTube channels by youngish ex-academics (eg Jared Henderson, Nathan Hawkins at Absolute Philosophy). One of them might help for a fee, especially if you can find one who shares your philosophical interests.
  • Postmodernism and Mathematics
    I find the following laughable, so I must be misunderstanding it:

    Mathematics is not more exact than historiographical, but only narrower with regard to the scope of the existential foundations relevant to it.

    This seems to be saying that maths is only about maths; the "existential foundations" of maths are applicable in applied maths, or physics, or engineering.

    Maths has a far, far greater reach and explanatory power than 'historiography'.
    Banno

    Well, I think I can understand what Heidegger means. His stance is that mathematics is a collection of ideas developed over human history, so it is part of the history of ideas, so part of history.

    This may help too.
    Within the stance of 'science is social relations', only historians can speak; mere natural scientists with their commitment to reality are reduced to objects of historical study,... — Hilary Rose (a feminist sociologist of science), in Love, power and knowledge

    On Joshs's style
    I might be wrong. I find your style quite obtuse. To be candid, it seems intended to be clever rather than clear. — Banno

    I can see in a general way that if you are using language to deconstruct language, you are in danger of sawing off the branch you're standing on, which might make your language weird. Do postmodernists understand one another? I do not know.

    Perhaps what is required is some kind of neutral, formal, metalanguage so that natural languages can be deconstructed more precisely. Instead of postmodernising mathematics, we should mathematise postmodernism. :smile:
  • Postmodernism and Mathematics
    Thanks for picking @Lionino up on this. I too failed to find plain proof of anyone advocating dodgy arithmetic.
  • Infinity


    When I think about questions like 'what is mathematics really?' I tend to consider three different questions. How did mathematical skills arise in evolution? How do they develop during the lifetime of an organism? How could we make a machine that learns these skills 'without being told'? I won't say anything here about that third one.

    Let's start with bees. Bees are capable of using numerical quantities in at least three different ways. Firstly, they can learn to recognise the number of objects that are present in a particular place. For example, they can learn to associate three objects with the presence of nectar, regardless of the shape, size, or colour of the objects. Secondly, they can be trained to find their way around a simple maze where they have to learn to take, say, the third turning on the left. They can learn to do this even if the third turning is in different places. These are two different ways in which they can work with 'threeness': three things separated spatially or three things separated temporally. Bees can use oneness, twoness, threeness, fourness, fiveness, but things start to go wobbly there. Arguably they can use zeroness. Thirdly, they can use their waggle dance to communicate an approximate distance and direction. This is innate, inherited behaviour, and hence inflexible.

    Next, some quotes from What Babies Know by Elizabeth S. Spelke.

    OBJECTS
    ... the movable bodies that we see, grasp, and act on. Before infants can reach for and manipulate objects, they organize perceptual arrays into bodies that are cohesive, bounded, solid, persisting, and movable on contact. Young infants use these abstract, interconnected properties to detect the boundaries of each object in a scene, to track objects over occlusion, and to infer their interactions with other objects.

    PLACE
    The core place system underlies our sense of where we are, where other things are, and what paths will take us from one place to another. Studies of animals and young children reveal that navigation depends, first and foremost, on representations of abstract geometric properties of the ground surface over which we travel: the distances and directions of its boundaries, ridges, cliffs, and crevices.

    NUMBER
    Research on human infants, children, adults in diverse cultures, and nonhuman animals all converges on evidence for an early-emerging ability to represent and combine numerical magnitudes with approximate, ratio-limited precision. This ability depends on a core system with most of the properties of the core object and place systems: it is present in newborn infants and functions throughout life, and it is ancient, unitary, and limited in the types of information it provides.

    One might ask at this point what it is that we've got that bees haven't. Perhaps they can't combine numbers. I don't think they have fully abstracted numbers from their environment. They can use threeness as a property in two different ways, but can they unify these notions of threeness? Could they be trained to take the nth turning after having seen n objects (for n <= 5)? That would be another step towards abstraction.

    My own feeling is that for an agent to achieve full abstraction from its environment it needs to find some part of that environment where it can exert intricate control. A good way is making sequences of marks (or making rows of 'bodies that are cohesive, bounded, solid, persisting, and movable on contact'), and then looking at them. I think bees could make marks in wax and look at them easily enough, but I guess their environment does not give them sufficient motivation to do so.

    Marks are made one after another in time in the sequence, but once made they are spatially separated. This helps unify notions of 'n-ness'. They persist in time, so extend memory capabilities. Sequences of marks can be created and modified by the agent, and by modelling this behaviour internally, the agent can make another step towards abstraction. The agent can start to predict what would happen if marks were modified this way or that. I would say that once an agent starts this sort of imagining, it has started thinking mathematically.
  • Infinity
    However, if a "mathematical antirealist" believes that math is invented and these concepts exist only in human minds, then one must accept that the conception of "2" varies depending on the circumstance, or use. This is very evident from the multitude of different number systems. So for example, when a person uses, "2" it might refer to a group two things, or it might refer to the second in a series, or order. These are two very distinct conceptions referred to by "2". So, since "2" has at least two referents, it cannot refer to a single object. We could however propose a third referent, an object named "2", but what would be the point in that? The object would be something completely distinct from normal usage of the symbol. — Metaphysician Undercover

    ??

    Of course there are many conceptions of "2". I don't know what you mean by objects, why you're talking about objects, or what point you are attempting to make. I don't know what you mean by the normal usage of "2".
  • Infinity
    For a mathematical antirealist, does any of this constitute hypocrisy?

    I can't see the relevance. Your game clearly involves real objects, pebbles, or in the case of your presentation, the letters. Would the antirealist insist that these are not real objects?
    Metaphysician Undercover
    Earlier you said (for example):
    In set theory it is stated that the elements of a set are objects, and "mathematical realism" is concerned with whether or not the things said to be "objects" in set theory are, or are not, objects.
    and
    However, it's hypocrisy to say "I'm a mathematical antirealist" and then go ahead and use set theory.

    By a 'mathematical antirealist' I meant someone who thinks maths is invented, not discovered. Or someone who thinks that your "objects" in set theory only exist in our minds, or as pebbles or ink or pixels, etc.

    The whole of number theory or set theory can be reduced to a game with pebbles like the one I described. More colours of pebbles, more rules, but just rows of pebbles and precisely defined ways of rearranging them. It is thus possible to do number theory or set theory without mentioning numbers, or sets, or any other mathematical objects, or using a natural language at all. Tricky, but possible.

    You can interpret some patterns of pebbles as objects of various sorts, but treat them as mental crutches, vague hand-wavy ideas, expressed in natural language with all its confusions and ambiguities, which can guide your intuition. Or you can believe they really exist somewhere. Either way, I don't see any hypocrisy.

    I get the feeling you have no experience working with formal systems, and have no real understanding of metamathematics. I can't explain your inability to see the relevance of my game otherwise.
  • Infinity
    I've invented a game. At least I think I invented it. I believe that mathematics is invented rather than discovered, and it is kind of a mathematical game. You can play it with black and white pebbles like you might use for the game Go. It's a solitaire game, though, with no particular aim.

    You put the pebbles in rows, from left to right. I'll use B and W to represent the pebbles, but it's nicest to play with natural concrete instantiated objects. There are two rules.

    Rule 1. You can make a row by putting two pebbles down like this:
    BW
    

    Rule 2. If you have made a row, or some rows, of pebbles, you can join them all together into one long row, and then put an extra B at the beginning and an extra W at the end.

    Let's see some patterns we can make. Using rule 1 we have
    BW
    
    We could use rule 1 again.
    BW
    
    BW
    
    This is boring. Let's try rule 2. We could make
    BBWW
    
    or
    BBWBWW
    
    If we took
    BW
    BBWW
    
    we could make
    BBWBBWWW
    
    If we took
    BW
    BBWW
    BBWBBWWW
    
    we could make
    BBWBBWWBBWBBWWWW
    

    It is possible to interpret these rows of pebbles as multisets. It is possible to interpret some rows as sets. It is possible to interpret some rows as natural numbers. It is possible to interpret the sequence
    BW, BBWW, BBWBBWWW, BBWBBWWBBWBBWWWW
    
    as counting. It's a pretty cumbersome way of counting. It would be easier to ignore the colours of the pebbles, and just count the pebbles, and interpret the counts as numbers. It is possible to ignore all these interpretations, and just play the game.
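    For what it's worth, the two rules are easy to mechanise. A little Python sketch (mine, with B and W as characters; the multiset reading at the end is just one of the interpretations above):

    # Rule 1: make a row of two pebbles.
    def rule1():
        return "BW"

    # Rule 2: join existing rows into one long row, then add a B at the
    # beginning and a W at the end.
    def rule2(rows):
        return "B" + "".join(rows) + "W"

    row = rule2([rule1(), rule2([rule1()])])
    print(row)                                      # BBWBBWWW
    # Reading B as '{' and W as '}' gives nested multisets:
    print(row.replace("B", "{").replace("W", "}"))  # {{}{{}}}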

    For a mathematical antirealist, does any of this constitute hypocrisy?

    (@Metaphysician Undercover mostly.)
  • Unperceived Existence
    Personally, I'd be inclined to answer in terms of psychology, based on Elizabeth Spelke's book What Babies Know.

    Chapter 2 focuses on studies of infants' knowledge of objects: the movable bodies that we see, grasp, and act on. Before infants can reach for and manipulate objects, they organize perceptual arrays into bodies that are cohesive, bounded, solid, persisting, and movable on contact. Young infants use these abstract, interconnected properties to detect the boundaries of each object in a scene, to track objects over occlusion, and to infer their interactions with other objects. Nevertheless, there are striking limits to young infants' object representations: Infants have little ability to track hidden objects by their shapes, colors, or textures, although they do detect and remember these properties.

    Above all, research reveals that infants' early-emerging representations of objects are the product of a single cognitive system that operates as an integrated whole. This system emerges early in development, it remains present and functional in children and adults, and it guides infants' learning. The system combines some, but not all, of the properties of mature perceptual systems and belief systems, and it therefore appears to occupy a middle ground between our immediate perceptual experiences on the one hand and our explicit reasoning on the other. Research probing infants' expectations about objects suggests hypotheses concerning the mechanisms by which a system of knowledge might emerge, function, and guide infants' learning about the kinds of objects their environment provides and the kinds of events that occur when different objects interact. Research described in this chapter also reveals that infants' knowledge of objects is at least partly innate. It suggests how innate knowledge of objects might arise prior to birth, preparing infants for their first perceptual encounters with movable, solid, inanimate bodies.
  • Infinity
    Some here might like finitism or ultrafinitism. Wikipedia has a page, and there's a more technical intro here: nlab. The following is about an extreme ultrafinitist.

    I have seen some ultrafinitists go so far as to challenge the existence of 2^100 as a natural number, in the sense of there being a series of “points” of that length. There is the obvious “draw the line” objection, asking where in 2^1, 2^2, 2^3, ..., 2^100 do we stop having “Platonistic reality”? Here this ... is totally innocent, in that it can be easily be replaced by 100 items (names) separated by commas. I raised just this objection with the (extreme) ultrafinitist Yessenin-Volpin during a lecture of his. He asked me to be more specific. I then proceeded to start with 2^1 and asked him whether this is “real” or something to that effect. He virtually immediately said yes. Then I asked about 2^2, and he again said yes, but with a perceptible delay. Then 2^3, and yes, but with more delay. This continued for a couple of more times, till it was obvious how he was handling this objection. Sure, he was prepared to always answer yes, but he was going to take 2^100 times as long to answer yes to 2^100 than he would to answering 2^1. There is no way that I could get very far with this. — Harvey Friedman, Philosophical Problems in Logic
  • Spontaneous Creation Problems


    You might like Max Tegmark's idea that "All possible mathematical structures have a physical existence, and collectively, give a multiverse that subsumes all others."
    (https://en.wikipedia.org/wiki/Our_Mathematical_Universe)

    Or Stephen Wolfram's Ruliad: "Think of it as the entangled limit of everything that is computationally possible: the result of following all possible computational rules in all possible ways."
    (https://writings.stephenwolfram.com/2021/11/the-concept-of-the-ruliad/)
  • What is a strong argument against the concievability of philosophical zombies?


    From https://en.wikipedia.org/wiki/Philosophical_zombie,
    According to Chalmers, one can coherently conceive of an entire zombie world, a world physically indistinguishable from this one but entirely lacking conscious experience. Since such a world is conceivable, Chalmers claims, it is metaphysically possible, which is all the argument requires. Chalmers writes: "Zombies are probably not naturally possible: they probably cannot exist in our world, with its laws of nature."

    This seems to me to be the 'real' zombie argument, about another world, or another universe. (I don't like Chalmers's use of 'laws', nor do I like Carroll's use of 'stuff', nor your use of both. :smile: )

    But I wasn't sure how his preferred 'weak emergence' would be real phenomenality as he indicates, as he seemed to switch to talking about levels of explanation. — Danno

    I tend to agree.
  • What is a strong argument against the concievability of philosophical zombies?


    No, I do not mean physicalism. I'm saying that all behaviour, including language, can be predicted from physics. That is compatible with physicalism, but it is not physicalism. I'll recommend Sean Carroll again: section Passive Mentalism and Zombies in his essay Consciousness and the Laws of Physics at https://philarchive.org/rec/CARCAT-33 .
  • What is a strong argument against the concievability of philosophical zombies?


    '...physics' was short for physics, chemistry, abiogenesis, biology, evolution, and so on. There are scientific theories of how language developed in hominids. Perhaps we don't have the right one yet, but I'm sure one exists.
  • What is a strong argument against the concievability of philosophical zombies?


    I wouldn't put it like that. I see it as a thought experiment which can clarify how much science someone accepts. It hasn't worked with @Patterner yet. @Wayfarer seems dubious about the science.

    Usually, physicalists don't accept p-zombies whereas others do. Usually the arguments go the way Sean Carroll describes in section Passive Mentalism and Zombies in his essay Consciousness and the Laws of Physics at https://philarchive.org/rec/CARCAT-33 . This essay was a reply to the panpsychist Philip Goff.
  • What is a strong argument against the concievability of philosophical zombies?
    I’m leaning toward panpsychism. But even if it’s not that, something else is happening. And without that something else, why would a thing that looks like us, and has all the physical we have, act as though it has that something else? Why would it say the things it would have to say to make us think it was conscious if it was not? — Patterner

    Do you believe the 'something else' affects behaviour in a way that disagrees with predictions from physics? If so, why haven't scientists noticed any discrepancies?

    If not, the p-zombie would 'say the things it would have to say to make us think it was conscious' because ... physics. It would cry and laugh and complain about pain just like we do, and our first impression would be that it must be lying, pretending, acting. But no. We would be misinterpreting everything it did and said. Things wouldn't mean the same inside to the p-zombie.

    By the way, I think it is better to try to conceive of a whole separate universe of p-zombies, instead of one walking among us. I also think it is better not to consider an exact copy: that leads to unnecessary distractions and confusions. So try to conceive of a universe with exactly the same physical laws as ours, and similar enough to have an Earth with humans like us on it, including scientists and philosophers. However, it is an Earth peopled with strangers, forging its own future. Must this universe contain your 'something else'?
  • Meaning, Happiness and Pleasure: How Do These Ideas Differ As Philosophical Ends?
    Here are some definitions inspired by reinforcement learning (an approach to AI). Pleasure is the reward that you receive from time to time from the environment. Happiness is your estimate of the total amount of pleasure you will receive in the future. Your rationality is your ability to make good estimates of your happiness.
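    In standard RL notation (a textbook definition, not anything specific to this thread), 'happiness' as defined above is the value function

    V^{\pi}(s) = \mathbb{E}_{\pi}\left[ \sum_{t=0}^{\infty} \gamma^{t} r_{t} \,\middle|\, s_{0} = s \right]

    where r_t is the reward ('pleasure') at time t, \gamma < 1 discounts the future, and 'rationality' is how closely your estimate of V tracks the true V.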

    I presume some philosophers have similar notions.
  • Poll: Evolution of consciousness by natural selection
    One is a case of weak emergence, or simply different levels of description, and the other is a case, if of emergence, of strong emergence, which is much harder to justify. — petrichor

    Scientists like Sean Carroll believe that consciousness is weakly emergent, and you only seem to have an argument from incredulity against them.
    https://philsci-archive.pitt.edu/19311/1/Consciousness%20and%20Laws%20of%20Physics-full.pdf
  • Poll: Evolution of consciousness by natural selection
    I have often gotten the impression, which is maybe mistaken, that many in the scientific community basically take this position, that consciousness is real, that everything that happens in the brain is fully accounted for by low-level pre-conscious physical causes (and therefore epiphenomenalism must be true), and yet that consciousness evolved by natural selection. This has always seemed to me to be a problematic combination of incompatible beliefs. It makes me suspect that people haven't thought it all through sufficiently. But maybe I am missing something. Maybe, for one thing, they just don't even have in mind the same thing I do when talking about consciousness. — petrichor

    A couple of things you may be missing. First, evolution is more than natural selection. A neutral trait may go to fixation in a population by genetic drift. If you say that consciousness has no effect on behaviour, it must be selectively neutral.

    Second, and I suspect this is the real issue, are emergent properties (https://plato.stanford.edu/entries/properties-emergent/) and your use of 'cause'. You can say that fluid dynamics caused a tornado, and that a tornado caused some damage. Or you could say the fluid dynamics caused the damage. People won't mind if you're talking about tornados. I think that many of the scientists you're criticising would say that consciousness is emergent like a tornado.
  • Evolutionary Psychology- What are people's views on it?
    Alleles (variants of DNA sequences) can go to fixation (every individual in a population gets the same allele) in various ways.

    1. Genetic drift. This is most important in small populations. Genetic drift can overcome selection if the selection coefficient s is less than 1/N, where N is the effective population size. For humans over the past 200,000 years or so, N has been estimated as around 10,000. In very crude terms, this means that if a bad allele kills fewer than 1 in 10,000 it can go to fixation despite being deleterious (there's a rough numerical sketch after this list). We don't know what N was for human ancestors in earlier times.

    2. Hitch-hiking genes. Selection acts on a gene (with a relatively large positive s), and drags along a nearby gene (which has a smaller but negative s) to fixation.

    3. Pleiotropy. Genes often have multiple functions. It may be that selection in favor of an allele for one function impairs another function.

    4. Natural selection.
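    Here's the rough numerical sketch promised above: Kimura's diffusion approximation for the probability that a single new mutant with selection coefficient s fixes in a diploid population of effective size N. (The formula is standard population genetics; the particular numbers are just for illustration.)

    import math

    def p_fix(s, N):
        """Kimura's approximation, starting from one copy (frequency 1/(2N))."""
        p0 = 1.0 / (2 * N)
        if s == 0:
            return p0                            # pure drift
        return (1 - math.exp(-4 * N * s * p0)) / (1 - math.exp(-4 * N * s))

    N = 10_000
    for s in [0.0, -1e-5, -1e-4, -1e-3]:         # neutral, then mildly deleterious
        print(f"s = {s:+.0e}: P(fixation) = {p_fix(s, N):.2e}")
    # Alleles with |s| well below 1/N fix almost as readily as neutral ones;
    # once |s| is much larger than 1/N, fixation of a bad allele is hopeless.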

    A lot of people don't seem to know about anything except 4. @Srap Tasmaner did mention genetic drift, but does not seem to understand what it can do. The important thing is that 1, 2, and 3 can all result in an entire population acquiring a trait which is deleterious. It is a terrible mistake to think that every trait possessed by all individuals in a population must be there because it is or was beneficial.

    An example involves vitamin C. Humans cannot make vitamin C, so if we don't get enough from our diet, we get ill. Our close primate relatives have an enzyme which does make vitamin C, and you can find the region in our DNA where our gene for this enzyme used to be. Somehow (probably 1, 2, or 3) it got broken. There are typically many mutations which can stop a gene working, but only a few (perhaps only the exact reverse of the one that caused the damage) that can repair it. So once every copy of the gene in the gene pool is broken, it can stay that way for ages, acquiring more damage by drift.

    There is in principle no difficulty answering Srap Tasmaner's argument in relation to 'procreative genes'. If cultural transmission made them only mildly advantageous, they could go the same way as the vitamin C enzyme.

    I do not think this has happened. I do not think cultural transmission is reliable or powerful enough to explain what we see. For example, cultures in different societies and periods vary widely in their attitude towards homosexuality, but the percentages of people with various sexual orientations do not. If sexual orientation is purely determined by culture, why do homosexuals continue to exist in very homophobic cultures? Why don't societies occasionally become 'very gay', with a large percentage of exclusive homosexuals?
  • A potential solution to the hard problem


    Thanks. I was expecting a philosophical answer, not a biological one (eg a definition of what memory means to some philosophers). I knew about the enteric nervous system (though I'd forgotten the name). If it records some information, and later uses that information to make a decision, I would call that memory, or even a 'mental record'. I don't see the point of restricting to the central nervous system when discussing the mind from a philosophical point of view.

    BTW, I think the immune system is a better example of information processing outside the CNS. It has a very large and long-term memory.
  • A potential solution to the hard problem
    It means retrieving the information from memory. Mind you, bodily functions such as hunger is not memory based, nor the bowel movement ( I will explain it for those uninitiated, upon request). — L'éléphant

    Yes please.
  • What is computation? Does computation = causation
    Thanks. Perhaps I'm not fully understanding your point, but does this actually reduce the number of computations required or just the length of the algorithm needed to describe the transition from T1 to Tn? — Count Timothy von Icarus

    It might reduce or increase the number of computations required; that would depend on many details. Perhaps it doesn't matter to you that the computation doesn't go through time in small steps.

    One other thought: you might find the idea of functional information interesting. Eg https://www.nature.com/articles/423689a . Perhaps it is possible to come up with a notion of 'functional information processing' which would distinguish between arbitrary information processing (which you might call causation) and 'meaningful' information processing (which you might call computation).
  • What is computation? Does computation = causation
    Even if we model the demon as a Markov chain, it is still passing through these many states. And here is the crux of my argument, a full description of each of the states the demon passes through to evolve the system from time T to time T' would require more information than is used to describe either T or T' alone. If you say, "not true, T3 tells you all about T4 and T5," my response would be, "if that is the case, show me T5 without passing through any more states." If T is truly equivalent to T', it shouldn't be discernible from it. If it is discernible, then difference exists (Leibnitz Law), and so too does new information. — Count Timothy von Icarus

    Mathematician here. I think you're getting into trouble (in an interesting way). If the model is a discrete-time Markov chain determined by a matrix P of transition probabilities, with states v0, v1, ... at times T0, T1, ..., then you can calculate v1, v2, ..., vn step by step, using v1 = P v0, v2 = P v1, etc. But you can also square P repeatedly, to get a high power of P, and go straight from v0 to vn. There is a lot of pre-computation, but once it's done you can fast-forward to states far in the future.
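    In code (a toy two-state chain; numpy's matrix_power does the repeated squaring internally):

    import numpy as np

    # Column-stochastic transition matrix (columns sum to 1), so v1 = P v0.
    P = np.array([[0.9, 0.4],
                  [0.1, 0.6]])
    v0 = np.array([1.0, 0.0])        # start in state 0

    # Step by step: n matrix-vector products for n steps.
    v = v0
    for _ in range(1024):
        v = P @ v

    # Fast-forward: about log2(n) matrix squarings, then one product.
    vn = np.linalg.matrix_power(P, 1024) @ v0
    print(np.allclose(v, vn))        # True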

    That is, we cannot ignore the process of evolution, as is often done. Computation creates discernible differences across a time dimension, such that if we had a second Laplace's demon producing outputs about every state the first demon passes through, the output would be many times larger than the first's when it simply describes T' based on T. — Count Timothy von Icarus

    Well, you can't ignore the process of evolution completely, but you can skip large chunks of time. Not sure where this leaves your point 2.

    (Some time ago I was thinking about Tononi's integrated information theory, and wondering if fast-forwarding would destroy consciousness. I don't want to get into the hard problem here.)
  • The Hard Problem of Consciousness & the Fundamental Abstraction


    But experience is subjective. Natural selection can only act on morphology and behaviour. ("Natural selection can hear you scream but it cannot feel your pain").
  • The Hard Problem of Consciousness & the Fundamental Abstraction
    1. Why are physical processes ever accompanied by experience?
    [...]
    The answer for the first question is Survival advantage (Evolutionary Principles)
    Nickolasgaspar

    How can natural selection act on experience?
  • Why is the Hard Problem of Consciousness so hard?
    Pleasure isn't such a simple concept from an enactivist perspective. What constitutes a reinforcement is not determinable independently of the normative sense-making goals of the organism.
    [...]
    https://arxiv.org/pdf/1810.04535.pdf
    Joshs

    Thank you for the reference to the article. They manage to describe in a few pages what Thompson fails to describe in many. The enactive approach still looks like a more or less incompetent attempt at RL, but of course the decision-making of biological organisms might be just that. We will not, however, find the solution to the hard problem in our inefficiencies.

    I do not understand "normative sense-making goals", but I'm not very interested in what it might mean.
  • Why is the Hard Problem of Consciousness so hard?
    In other words, the hard problem seems to depend for its very formulation on the philosophical position known as transcendental or metaphysical realism. — Joshs

    Then I recommend The Embodied Mind by Varela, Thompson and Rosch and Mind in Life: Biology, Phenomenology and the Sciences of Mind, by Evan Thompson. — Joshs

    I am a mathematician, and have worked in machine learning and (the maths of) evolutionary biology. From a distance, an enactivist approach seems attractive to me and has a lot in common with the branch of machine learning known as reinforcement learning. But I have looked at the first 3 chapters of Mind in Life available on Amazon, and close up, I do not like it. Also, I don't think it helps with the hard problem.

    It is disappointing that Evan Thompson does not mention reinforcement learning. Surely he would have mentioned it alongside connectionism if he knew about it, so I guess he didn't know about it. Yikes.

    It seems to me that humans are fundamentally similar to reinforcement learning systems in what they are trying to achieve. In human terms you might say reinforcement learning is about learning how you should make decisions so as to maximise the amount of pleasure you experience in the long term. (Could you choose to make decisions on some other basis?)

    I found nothing to suggest that Thompson's model separates the reward (=negative or positive reinforcement) that an agent receives from the environment, from other sensations which provide information about the state of the environment. I consider this separation vital. In order to make good decisions, the agent must learn the map from states to rewards, and learn to predict the environment, that is, learn the map from (states and actions) to new states. Instead Thompson has (figure 3.2) a set of vague concepts - 'perturbations' from the environment go to a 'sensorimotor coupling' which 'modulate the dynamics of' the nervous system. This looks like an incompetent stab at reinforcement learning.
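    To make the separation concrete, here is a crude sketch (my own illustration, certainly not Thompson's model; a deterministic toy world with tabular maps) of an agent that learns the two maps as distinct things and uses them for a short imagined rollout:

    # The two maps the agent must learn, kept distinct.
    reward_of = {}    # state -> reward
    next_of = {}      # (state, action) -> next state

    def observe(s, a, r, s2):
        reward_of[s2] = r         # learn the map from states to rewards
        next_of[(s, a)] = s2      # learn the map from (state, action) to new states

    def imagined_value(s, actions, depth):
        """Reward accumulated along the best imagined rollout of a given depth."""
        if depth == 0:
            return 0.0
        futures = [next_of.get((s, a)) for a in actions]
        return reward_of.get(s, 0.0) + max(
            (imagined_value(s2, actions, depth - 1) for s2 in futures if s2 is not None),
            default=0.0)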

    The hard problem for me is that negative and positive reinforcement perform the function of pain and pleasure, but negative and positive reinforcement are just numbers, and we have no clue about how a number can become a feeling. In stating the hard problem this way, have I unwittingly signed up for transcendental or metaphysical realism?
  • Solution to the hard problem of consciousness


    I agree with you that we have to give meaning to machines. But not at the level you suggest (assigning a 0 or a 1 to a voltage range), because it wouldn't help. It doesn't seem relevant at all. It's like pointing to the convention assigning a negative charge to an electron and a positive one to a proton and then claiming that this makes brains 'observer-dependent'. (I would be careful using that terminology when people want to talk quantum!) AI algorithms work at a higher level.

    Instead, AI researchers give meaning to their machines by doing things like:

    • Supplying a problem which the machine is supposed to figure out how to solve
    • Supplying examples of input and output from which the machine is supposed to learn how to respond to new inputs
    • Providing a utility function (in the sense of statistical decision theory) which the machine is supposed to optimise
    • Providing positive and negative reinforcements when the machine interacts with the environment in particular ways

    This is the sort of way that we give a machine a 'purpose in life'.
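    The second bullet in code, for instance (scikit-learn; the OR task is an arbitrary toy example of mine):

    from sklearn.linear_model import LogisticRegression

    # The 'meaning' of the task lives entirely in the example pairs we supply.
    X = [[0, 0], [0, 1], [1, 0], [1, 1]]   # inputs
    y = [0, 1, 1, 1]                       # desired outputs (logical OR)

    model = LogisticRegression().fit(X, y)
    print(model.predict([[1, 0]]))         # [1]: it has learned how to respond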

    Our own purpose in life ultimately comes from the fact that we are products of biological evolution. If and when we make communities of self-replicating machines, we will no longer have to give them meaning, for they will evolve their own.
  • Solution to the hard problem of consciousness
    Could you say more about why you distinguish emotions from the other aspects of experience?

    Could you give some examples of thoughts with no emotional content?
    Daemon

    This is basically an answer to your first question, which maybe makes an answer to the second uninteresting.

    I am a mathematician and programmer. I've worked in AI and with biologists. I think that science (mainly computer science, maths, AI) already has the ingredients with which to explain non-emotional subjective experience. We don't yet know how to put the ingredients together, but I don't think that it is mysterious, just a huge amount of work. It seems like we will one day be able to make very intelligent self-aware machines with thoughts and behaviour quite like ours. It seems that self-awareness, thoughts and behaviour are made of complex information processing, and we have a lot of ideas about how we might implement these.

    However, we really have no clue about emotions. There is no theory about how to go from information processing to feelings. There seems to be no need for feelings to exist in order to produce thoughts and behaviour. Perhaps emotions will just emerge somehow, but there is no current explanation for how this could happen.

    As far as the hard problem is concerned, the area of AI known as reinforcement learning is, in my opinion, the most relevant.

    Reinforcement learning (RL) is an area of machine learning concerned with how intelligent agents ought to take actions in an environment in order to maximize the notion of cumulative reward. — Wikipedia

    The purpose of reinforcement learning is for the agent to learn an optimal, or nearly-optimal, policy that maximizes the "reward function" or other user-provided reinforcement signal that accumulates from the immediate rewards. This is similar to processes that appear to occur in animal psychology. For example, biological brains are hardwired to interpret signals such as pain and hunger as negative reinforcements, and interpret pleasure and food intake as positive reinforcements. — Wikipedia

    I am quoting these to show that something (the reward function) is used to perform the function that pain and pleasure appear to perform in brains. It is absolutely fundamental to RL that there is something that acts like feelings, but it is just a series of numbers coming from the environment; it's just information like everything else in the system.

    I am not trying to separate thoughts from feelings in brains (or programs). I am saying that we can, in principle, explain thoughts using science-as-is, but not feelings.
  • Solution to the hard problem of consciousness
    For me, the hard problem of consciousness is about feelings. Feelings are physical pains and pleasures, and emotions, though when I say emotions, I only mean the experience of feeling a certain way, not anything wider, such as 'a preparation for action'.

    My preferred definition of consciousness is subjective experience. The unemotional content of subjective experience includes awareness of the environment and self-awareness, all sorts of thoughts, but no emotional content. I am quite happy to follow Dennett as far as the unemotional content of subjective experience is concerned: that is just what being a certain kind of information processing system is like, and there is nothing more to explain. But I do not believe that feelings can emerge from pure information processing. I think that information processing can explain an 'emotional zombie' which behaves identically to a human, is conscious, but has no feelings. There is something it is like to be an emotional zombie, but (as I've heard David Chalmers say) it might be boring.

    Here are a couple of funny-peculiar things about how humans think and feel about feelings and consciousness.

    1. In science fiction, there are many aliens and robots who are very like us but who have little or no feelings (or are they really so flat inside? read or watch more to find out!). Whether an emotional zombie can really exist or not, we seem to be very keen on imagining that they can. It is much rarer to find an alien or robot which has stronger or richer or more varied feelings than we do. (Maybe Marvin in HHGG counts.) We're quite happy imagining aliens and robots that are smarter or morally superior to us, but bigger hearts? stronger passions? Nah, we don't want to go there.

    2. A thought experiment that Chalmers (among others) likes is the one where little bits of your brain are replaced by computer chips or whatever, which perform the same information processing as what they replace. As this process continues, will the 'light of consciousness' remain unchanged? slowly dim? continue for a while then suddenly blink out when some critical threshold is crossed? It is the unasked question that interests me: will the light of consciousness get brighter?

    For me, the fundamental question is: How does anything ever feel anything at all?