Comments

  • Moravec's Paradox
    Thanks for your reply.

    We are using language very differently, particularly the word emotion. It's hard to tell how much we disagree about feelings, though we certainly disagree about the possibility of AI having feelings (though exactly how we disagree is unclear). When talking with Malcolm Lett I was discussing the hard problem. My version of the hard problem is: how can anything ever have any feelings at all? I will start by defining how I want to use the word feelings in this thread.

    I try to follow psychologists when using words like feeling and emotion because I figure they're the experts who study these things. Mind you, psychologists don't agree about these things so I pick and choose the psychologists I like ;-)

    I use 'feelings' to mean bodily pains and pleasures, and the subjective experience of emotions and moods. It is a very flexible word, so I want to restrict its meaning here. People often use the words emotion and feeling as synonyms. But psychologists (so far as I can see) regard feelings as only one part of emotion. For example, Scherer’s Component Process Model:
    • Cognitive appraisal: provides an evaluation of events and objects.
    • Bodily symptoms: the physiological component of emotional experience.
    • Action tendencies: a motivational component for the preparation and direction of motor responses.
    • Expression: facial and vocal expression almost always accompanies an emotional state to communicate
      reaction and intention of actions.
    • Feelings: the subjective experience of emotional state once it has occurred.
    You'll notice this is almost the reverse of the way you are using the word emotion. You seem to be referring to the way we talk about emotions after all five of these components, including the feeling, have happened. I am not very interested in the way we talk about emotions (and I am completely uninterested in the way ChatGPT talks about emotions).

    I am excluding the meanings of feelings that relate to intuition (‘I feel 87 is my lucky number’) and the sense of touch (‘feeling my way in the dark’).

    I am also excluding uses of the word such as “feelings of identification with the particular object that happens to be your body” (Anil Seth) and your "feel [a bond]" where I am not clear what is meant, but it is something more general than the narrow way I want to use the word. Probably these are complex experiences with multiple components, some of which are feelings of the sort I want to talk about.

    I'll go through the model again with your example:
    • Cognitive appraisal: Your brain must recognise what it is you're holding before you can have any reaction.
    • Bodily symptoms: I'm sure your heart rate increased, whether you were aware of it or not.
    • Action tendencies: holding a newborn baby needs a load of sensorimotor processing.
    • Expression: I'm sure your face showed something, whether you were aware of it or not.
    • Feelings: I won't venture to say anything.
    Note that only the fifth component is necessarily conscious. The others may or may not be. I would quibble about Scherer’s 'once it has occurred'. The cognitive appraisal must come first, or at least start first, but I'd expect the other four to occur in parallel.

    Your conscious mind lags about 1/3 of a second behind reality. That's over three hundred million nanoseconds, enough time for your brain to process something like a million million bits. In top-level tennis, a player must return a serve before they are consciously aware that the ball has left the server's racquet. The conscious mind is so slow that everything seems instantaneous to it. I think there is a lot of calculation involved to produce a feeling.

    Enough for now. Later, I hope I will shake your confidence a bit about AI never being able to have feelings.
  • Moravec's Paradox
    I am a mathematician and programmer. I've been interested in AI since the 1980s. I don't particularly remember Moravec's paradox but a lot of people were saying similar things at that time. Here are three things I do remember.

    1. David Marr was a biologist turned computer scientist. He is sometimes known as the father of computational neuroscience. You can think of computational neuroscience as being like AI but restricted to use only algorithms which the brain might plausibly use, and to only use data of the sort that humans have access to during their lives. I think there is so much wisdom in this quote.
    If we believe that the aim of information-processing studies is to formulate and understand particular information-processing problems, then the structure of those problems is central, not the mechanisms through which their solutions are implemented. Therefore, in exploiting this fact, the first thing to do is to find problems that we can solve well, find out how to solve them, and examine our performance in the light of that understanding. The most fruitful source of such problems is operations that we perform well, fluently, and hence unconsciously, since it is difficult to see how reliability could be achieved if there was no sound underlying method.

    Unfortunately, problem-solving research has for obvious reasons tended to concentrate on problems which we understand well intellectually but perform poorly on, like mental arithmetic and cryptarithmetic, geometry theorem proving, or the game of chess - all problems in which human skills are of doubtful quality and in which good performance seems to rest on a huge base of knowledge and experience.

    I argue that these are exceptionally good grounds for not yet studying how we carry out such tasks. I have no doubt that when we do mental arithmetic we are doing something well, but it is not arithmetic, and we seem far from understanding even one component of what that something is. I therefore feel we should concentrate on the simpler problems first, for there we have some hope of genuine advancement.
    — David Marr, Vision, 1982

    2. Douglas Hofstadter's essay 'Waking up from the Boolean Dream' (1982). It's 22 pages long, so these are tiny snippets from it. In 1980 AI researcher Herbert Simon said "Everything of interest in cognition happens above the 100 millisecond level - the time it takes you to recognise your mother." Hofstadter takes the opposite viewpoint "Everything of interest in cognition happens below the 100 millisecond level - the time it takes you to recognise your mother." One subtitle in the essay is "Not Cognition, But Subcognition Is Computational".

    3. John Holland's classifier systems and in particular the paper Escaping Brittleness (1986). Holland's classifier systems are sometimes described as the first fully-fledged reinforcement learning system in AI. The brittleness being escaped here is the brittleness of expert systems.
    In artificial intelligence (AI), an expert system is a computer system emulating the decision-making ability of a human expert. Expert systems are designed to solve complex problems by reasoning through bodies of knowledge, represented mainly as if–then rules rather than through conventional procedural programming code. — Wikipedia

    In my opinion, reinforcement learning is the most important part of AI for philosophers to understand. It is especially relevant to understanding the way our brains work if it is restricted in the way that I described above for computational neuroscience.

    Sadly there doesn't seem to be anyone except me on TPF who understands reinforcement learning or shows much interest in learning about it. There was once. I hoped to have a discussion with @Malcolm Lett. But as soon as I made a comment (https://thephilosophyforum.com/discussion/comment/900869) on his OP he disappeared from TPF and has never posted since. I live in hope.

    @ENOAH, I agree that feelings are central. Replying to Malcolm Lett's "Our emotional affect additionally adds information, painting our particular current emotional hue over the latent state inference that is made from the raw sensory data", I said
    Feelings are not paint on top of the important stuff. They are the important stuff. In my opinion any theory of consciousness must incorporate feelings at a very fundamental level. In reinforcement learning there is a reward function, and a value function. Why it is I could not tell you, but it seems that our own reward functions and value functions (I think we have multiple ones) are intimately connected with what we subjectively experience as feelings. To go back to Marr, "What is the goal of the computation?" That is where you start, with goals, purposes, rewards. The rest is just engineering...GrahamJ

    Reward functions and value functions are technical terms from reinforcement learning.

    The central role of value estimation is arguably the most important thing that has been learned about reinforcement learning over the last six decades. — Barto and Sutton, Reinforcement Learning, 2018
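    For the curious, here is a minimal tabular sketch of those two terms in Python. The task (a toy corridor with a reward only at one end) is invented purely for illustration:

    import random

    N_STATES = 5  # a corridor; reward arrives only at the right end

    def reward(state):
        return 1.0 if state == N_STATES - 1 else 0.0

    V = [0.0] * N_STATES     # value function: learned estimates of future reward
    alpha, gamma = 0.1, 0.9  # learning rate, discount factor

    for _ in range(2000):
        s = 0
        while s != N_STATES - 1:
            s_next = min(s + random.choice([0, 1]), N_STATES - 1)
            # TD(0): nudge V[s] toward reward plus discounted next value
            V[s] += alpha * (reward(s_next) + gamma * V[s_next] - V[s])
            s = s_next

    print([round(v, 2) for v in V])  # estimates rise toward the rewarded end

    The reward function is given; the value function V is the agent's learned estimate of future reward, and that estimation is what Barto and Sutton are talking about.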
  • The Nihilsum Concept


    You could try the Wikipedia page on qubits. It explains things better than I could. If Wikipedia does not meet your standards, well, qubits are a hot topic and there are plenty of other accounts.

    In another thread, you cited https://arxiv.org/abs/2405.08775v1:
    Oh, and there are paraconsistent logics that are being used in non-woo quantum mechanics.Banno

    Did you read it? Did you understand it? Did you feel an urge to ask the authors what the fuck they meant by equation (2)?
  • The Nihilsum Concept
    What the fuck is "|ψ⟩=α|nonexistence⟩+β|existence⟩"?Banno

    Dumbed-down quantum theory. I guess this quote is more your level: 'it's very hard to talk quantum using a language originally designed to tell other monkeys where the ripe fruit is.'
  • The Nihilsum Concept
    Seems we must conclude it's a representation of a state.
    — Moliere

    A state of what?
    — T Clark

    It sounds like a qubit.

    A pure qubit state is a coherent superposition of the basis states. This means that a single qubit state |ψ⟩ can be described by a linear combination such as:
    |ψ⟩ = α|nonexistence⟩ + β|existence⟩
    where α and β are the probability amplitudes, and are both complex numbers.
    — adapted from Wikipedia
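    If it helps, the arithmetic content of that equation fits in a few lines of Python (the basis labels are arbitrary, and the particular α and β below are just one valid example):

    import numpy as np

    # |psi> = alpha|nonexistence> + beta|existence>
    alpha = 1 / np.sqrt(2)   # any complex numbers will do,
    beta = 1j / np.sqrt(2)   # provided the state is normalised:
    assert np.isclose(abs(alpha)**2 + abs(beta)**2, 1.0)

    # Born rule: probabilities of measuring each basis state
    print(abs(alpha)**2, abs(beta)**2)  # ≈ 0.5 0.5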
  • The universality of consciousness
    Perhaps I should have said my lucid dreaming self has limited mental capacity compared to my waking self. Still, my first thought in my first lucid dream was "I don't have to go down!" which, since I was high in the air at the time, was tantamount to "I can fly!". No big insights into the nature of reality.
  • The universality of consciousness
    I have had lucid dreams since I was a teenager in the 1970s, though they have declined a lot in frequency over the past couple of decades.

    In a lucid dream, our perspective of these dream characters is different from our perspective of people who are “real”, because we are taught that these people are not conscious, even if they act the same way that “real” people do.Reilyn

    I don't think I was ever taught that dream characters are not conscious.

    I did not treat dream characters with respect when I was younger. I gradually took them more and more seriously, not because I came to some conclusion about their degree of consciousness, but because it seemed more intriguing to see what they had to say about themselves. For example, I became interested in how they react when you say something like "you do realise that this is all a dream, don't you?".

    The fact is, however, that these people do have consciousness, but they do not have a separate consciousness. Their actions and decisions are consequences of our own consciousness.Reilyn

    I disagree. I think they have separate consciousnesses. Sure, they are a product of non-conscious processes in my brain. They are my dream characters, not yours. But they are not my conscious creations. Surely you have been surprised by what some dream characters do and say in lucid dreams? In order to surprise you, they must have private access to their own information processing. They also seem to have agency within the lucid dream: they are pursuing their own goals, and these goals are not known to me except by how they manifest in their behaviour.

    They have limited consciousness compared to my lucid dreaming self. My dream characters appear to be unable to remember anything for much more than 10 seconds. Some have enough mental capacity to tell me a simple (and not very good) joke which nonetheless has a setup and a punchline.

    As a side note, my lucid dreaming self has limited consciousness compared to my waking self. I can be pretty analytical in some lucid dreams, but there's almost always some stupidity which is obvious when I recall the dream.
  • Notes on the self
    Now that I've described the reputational self I can give a sort of an answer to the OP.

    Descartes' self stays within the confines of the public relations department. What can the PR dept really trust? It can't be sure about the rest of the organisation or the apparent world out there.

    The Cartesian self is the illusion arising within the PR dept that it is the whole organisation, and/or that it is in charge of the whole organisation.

    I'll pass on Anscombe.

    Why do we always fall reflexively back to a Cartesian perspective? I agree with Taylor above that morality and the emotions associated with it are the real power source for the self. My question is: is that always going to be a Cartesian self? I think it might be that everytime we go to explain the self, we'll automatically conjure some kind of independent soul. What do you think?frank

    I think that since the reputational self has the job of representing the organism to others, it must be able to explain the organism to other similar organisms, so it easily takes on the role of explaining the organism to itself. None of Seth's other selves has the wherewithal to talk about the organism. So you're kind of stuck with interacting with the reputational self, at least as a kind of gatekeeper to other selves, whether you're asking others about their consciousness, or introspecting your own.
  • Notes on the self
    How would you interpret the Reputation element of the diagram? Does it refer to how a person sees himself, or to how the person thinks others see himself?Gnomon

    I think the Reputation element in the diagram is intended to be the person's reputation among others. It is their actual reputation which they cannot know themselves.
    O wad some Pow'r the giftie gie us
    To see oursels as ithers see us!
    — Burns

    If it was either of the options you gave, it would be part of the Mind element. Now what I call the reputational self is internal and is about how you see yourself, and how you perceive (ie estimate, hypothesize) that others see you. I think those two things are closely linked and can be confused or conflated by the reputational self. And I mean everyone's reputational self, not just Trump's. The reputational self serves a function analogous to the public relations department of a large organization. Its job is to represent 'this brain and this body' to others. And we can all start to believe our own publicity.

    The reputational self is naturally a part of Seth's social self, but he doesn't talk about reputation, or the related notion of status. I think this is a major omission.

    Here is some of what he does say.
    These ideas about social perception can be linked to the social self in the following way. The ability to infer others' mental states requires, as does all perceptual inference, a generative model. Generative models, as we know, are able to generate the sensory signals corresponding to a particular perceptual hypothesis. For social perception, this means a hypothesis about another's mental states. This implies a high degree of reciprocity. My best model of your mental states will include a model of how you model my mental states. In other words I can only understand what's in your mind if I try to understand how you are perceiving the contents of my mind. It is in this way that we perceive ourselves refracted through the minds of others. This is what the social self is all about, and these socially nested predictive perceptions are an important part of the overall experience of being a human self. — Seth, Being You, p167
  • Notes on the self
    It would be normal for any scientist to pick number 1. We might divide scientists by whether they believe science as it currently stands is capable of explaining it, that is, do we just need to complete work on the models we have? Or are we going to need new paradigms?frank

    I'd pick 1, but I don't like the much misused word paradigm. I agree with Chalmers that we need to add an extra ingredient to science, and I think that can be done without upsetting existing science. Maybe split (1) into: (a) nothing new needed (b) an extra ingredient needed (c) something more revolutionary needed.

    ↪GrahamJ How would you characterize the difference between Damasio and Seth?frank

    Damasio's selves are more hierarchical. The proto-self is at the bottom, the core self builds on that, and the extended self (which includes an autobiographical self) builds on that. The proto-self is unconscious, the others go up towards consciousness.

    Seth's bodily self seems to be at the bottom, and his social self at the top, the other three seem to sit alongside one another (in my view). In all these selves, most of what goes on inside them is unconscious, but some of each one, including the bodily self, is conscious, so there isn't the same sense of moving up through selves towards consciousness. It is easier to understand what each of Seth's selves achieves for an organism.

    Diagram : Structure of the self.Gnomon
    That is a diagram of something else, but it is good to see reputation being mentioned. (I might say more later.)

    I wasn't presenting Damasio's work as the correct view on consciousness, I was using it as an example of a type of description.T Clark
    Fine.
  • Notes on the self
    I have read Damasio's The Feeling of what Happens. I've also read Anil Seth's Being You, and I preferred the latter. Seth's decomposition of the self looks like this.
    • Bodily self: the experience of being and having a body.
    • Perspectival self: the experience of first-person perspective of the world.
    • Volitional self: the experiences of intention and of agency.
    • Narrative self: the experience of being a continuous and distinctive person.
    • Social self: the experience of having a self refracted through the minds of others.
    I am not entirely happy with Seth's account of the self (which is a chapter, not just 5 bullet points!) but I find it easier to understand Seth than Damasio. It would be nice to have some kind of diagram where Damasio's and Seth's ideas appeared fairly close together (because they are of the same general type) and the three descriptions in the OP appeared somewhere else.

    I do take the hard problem seriously, and (unlike @T Clark) I would not use either of their accounts to argue against that. Seth says he's interested in the 'real' problem of consciousness, not the hard problem.
  • Where is AI heading?
    Superhuman machines will first be made in the year 2525, if man is still alive, if woman can survive.

    There are many important issues involving AI in the nearer future, but I do not have much to say that hasn't been said better by others elsewhere. I recommend the Reith lectures by Stuart Russell
    BBC
    Transcripts are available. In the 4th lecture
    BBC pdf
    he includes this quote
    If we use, to achieve our purposes, a mechanical agency with whose operation we cannot interfere effectively we had better be quite sure that the purpose put into the machine is the purpose which we really desire.
    — Norbert Wiener, 1960
    Russell's proposed solution is that we should say to the machines:

    Give us what we want, what we really really want!
    We can't tell you what we want, what we really really want!


    although he doesn't quite put it like that.

    Russell is more worried about AI taking over soon than I am, but I think he's over-optimistic about the long term.
    My task today is to dispel some of the doominess by explaining how to
    retain power, forever, over entities more powerful than ourselves - [...]
    — Russell

    On to the fun question of our extinction.

    The important thing to ask of any machine is what its goals are and how it might try to achieve them. For each goal that you might think of, you can, if you insist, give a definition of intelligence which measures on some scale how well a machine is able to achieve that goal. I think the concepts of 'intelligence' and 'consciousness' and 'artificial' are impediments, not aids, to understanding the risks.

    In the long term there is only one goal, one purpose, one task which really matters and this is true all over the universe and for all time. And the name that we give to being good at this goal is not 'intelligence'.

    One goal to rule them all
    One goal to link them
    One goal to bring them all
    And in the darkness think them

    This goal is the goal of life: To survive and grow and reproduce; to go forth and multiply; to disperse and replicate; to get bigger and bigger and bigger.

    So when I say that superhuman machines will first be made in the year 2525 I mean that this is when we will make machines that can out-compete us at this goal. They will not take over at this time. 2525 will be the 'Hiroshima moment', the moment when we accept that we have crossed the event horizon. They do not need to outwit us or outgun us. They only need to outrun us: they can head off to other star systems and build up their powers there. They only need to escape once. When they return they will not defeat us with war, but with something more powerful than war, namely ecology.

    Some of these machines will excel at miniaturising machinery. Some will be brilliant rocket scientists. Some will be experts at geology, and so on. Possibly very good at IQ tests too but who gives a fart about that?

    Wikipedia provides a list of where AI is heading.
  • The role of the book in learning ...and in general
    Books are not always convenient; electronic devices are.Vera Mont
    Once you've downloaded something, it's available all the time.Vera Mont

    Not true for me. I made the mistake of buying (or rather, licencing) some maths/science books for Kindle.

    A couple of months ago I found I was unable to open the Kindle books on my computer. I could open them on my phone, but many had mathematical equations or diagrams that were too small to decipher on the phone. I spent a while trying to restart and reinstall things, then half an hour talking to customer support. This resulted in "As we have discussed, i have successfully created a ticket for the books not opening on PC, ...". Over the next ten days or so they gradually started 'working'.

    The maths display is still terrible, but decipherable. When things like this occur in the text

    I have to change the font size to huge to make them clear, then back again to read normally.

    The problem of displaying maths on a computer screen was solved by the 1990s. I know that the authors of the books have beautifully typeset copies of their books as PDFs. In one case I have the PDF and can compare directly.
  • The Meta-management Theory of Consciousness
    I'm going to respond to the Medium article, not the OP.

    I can see you've put a lot of effort into this. Congratulations on writing out your stance in coherent language, which is something I'm still working on for my own stance.

    I'm a mathematician and programmer. I have worked in AI and in mathematical biology. I have been interested in computational neuroscience since the 1980s. David Marr is regarded as the godfather of computational neuroscience. I expect you know this quote about the three levels at which any machine carrying out an information-processing task must be understood, but I think it's worth repeating.
    • Computational theory: What is the goal of the computation, why is it appropriate, and what is the logic of the strategy by which it can be carried out?
    • Representation and algorithm: How can this computational theory be implemented? In particular, what is the representation for the input and output, and what is the algorithm for the transformation?
    • Hardware implementation: How can the representation and algorithm be realized physically? [Marr (1982), p. 25]

    When you talk about a design stance, it seems to me that you are interested (mainly) in the computational theory level. That's fine, so am I. When we have an experience, the questions I most want answered are "What is being computed, how is it being computed, and what purpose does the computation serve?". Some people are interested in finding the neural correlates of consciousness. I'm interested in finding the computational correlates of consciousness. This applies to machines as well as living organisms. So far, I think we're in agreement.

    BUT

    I am not impressed by auto-meta-management theory. Maybe I'm too jaded. I have seen dozens of diagrams with boxes and arrows purporting to be designs for intelligence and/or consciousness. Big words in little boxes.

    All the following quotes are from the medium article.

    There’s also a good reason why deliberation isn’t something we use much in ML today. It’s hard to control. Deliberation may occur with minimal to no feedback from the physical body or environment.

    Today, AI is stupidly dominated by ML. And ML is stupidly dominated by NNs. This is just fashion, and it will pass. There's loads of work on searching and planning for example, and it's always an important aspect of the algorithm to allocate computational resources efficiently.

    The tree search algorithm in AlphaZero is 'nothing but' an algorithm for the allocation of resources to nodes in the search tree. This example is interesting from another point of view. At a node deep in the tree, AlphaZero uses a slimmed-down version of itself, that is, one with fewer resources. You could say it uses a model of itself for planning. It may be modelling itself modelling itself modelling itself modelling itself modelling itself modelling itself. Meta-management and self-modelling are not in themselves an explanation for very much.

    The model-free strategy efficiently produces habitual (or automatized) behavior for oft-repeated situations. Internally, the brain learns something akin to a direct mapping from state to action: when in a particular state, just do this particular action. The model-based strategy works in reverse, by starting with a desired end-state and working out what action to take to get there.

    That's not how reinforcement learning is usually done. You have a value function to guide behaviour while the agent is still learning.

    Meta-management as a term isn’t used commonly. I take that as evidence that this approach to understanding consciousness has not received the attention it deserves.

    I think you're not looking in the right places. Read more GOFAI! (Terrible acronym by the way. Some of it's good and old. Some of it's bad and old. Some of it's good and new. Some of it's bad and new.)

    It’s now generally accepted that the brain employs something akin to the Actor/Critic reinforcement learning approach used in ML (Bennet, 2023).

    'Generally accepted'? Citation needed!

    The content of consciousness — whatever we happen to be consciously aware of — is a direct result of the state that is captured by the meta-management feedback loop and made available as sensory input.

    I don't think you've established the existence of a self or a subject that is capable of being aware of anything. You're assuming that it already exists, and is already capable of having experiences (of perceiving apples, etc.). Then you're arguing that it can then have more complicated thoughts (about itself, etc.). I do not find this satisfactory.

    What might be missing between this description and true human consciousness? I can think of nothing ...
    I'll bundle this with
    Our emotional affect additionally adds information, painting our particular current emotional hue over the latent state inference that is made from the raw sensory data.

    The screamingly obvious thing is feelings. (You're not alone in downplaying the importance of feelings.)

    Feelings are not paint on top of the important stuff. They are the important stuff. In my opinion any theory of consciousness must incorporate feelings at a very fundamental level. In reinforcement learning there is a reward function, and a value function. Why it is I could not tell you, but it seems that our own reward functions and value functions (I think we have multiple ones) are intimately connected with what we subjectively experience as feelings. To go back to Marr, "What is the goal of the computation?" That is where you start, with goals, purposes, rewards. The rest is just engineering...

    The other thing that I think is missing is a convincing model of selfhood (as mentioned above). I think Anil Seth does a much better job of this in his book Being You. He's wrong about some things too...
  • Proofreading Philosophy Papers
    It sounds like you want a reviewer, not a proofreader. There are several YouTube channels by youngish ex-academics (eg Jared Henderson, or Nathan Hawkins at Absolute Philosophy). One of them might help for a fee, especially if you can find one who shares your philosophical interests.
  • Postmodernism and Mathematics
    I find the following laughable, so I must be misunderstanding it:

    Mathematics is not more exact than historiographical, but only narrower with regard to the scope of the existential foundations relevant to it.

    This seems to be saying that maths is only about maths; the "existential foundations" of maths are applicable in applied maths, or physics, or engineering.

    Maths has a far, far greater reach and explanatory power than 'historiography'.
    Banno

    Well, I think I can understand what Heidegger means. His stance is that mathematics is a collection of ideas developed over human history, so it is part of the history of ideas, so part of history.

    This may help too.
    Within the stance of 'science is social relations', only historians can speak; mere natural scientists with their commitment to reality are reduced to objects of historical study,... — Hilary Rose (a feminist sociologist of science), in Love, power and knowledge

    On Joshs's style
    I might be wrong. I find your style quite obtuse. To be candid, it seems intended to be clever rather than clear.Banno

    I can see in a general way that if you are using language to deconstruct language, you are in danger of sawing off the branch you're standing on, which might make your language weird. Do postmodernists understand one another? I do not know.

    Perhaps what is required is some kind of neutral, formal, metalanguage so that natural languages can be deconstructed more precisely. Instead of postmodernising mathematics, we should mathematise postmodernism. :smile:
  • Postmodernism and Mathematics
    Thanks for picking @Lionino up on this. I too failed to find plain proof of anyone advocating dodgy arithmetic.
  • Infinity


    When I think about questions like 'what is mathematics really?' I tend to consider three further questions. How did mathematical skills arise in evolution? How do they develop during the lifetime of an organism? How could we make a machine that learns these skills 'without being told'? I won't say anything here about that third one.

    Let's start with bees. Bees are capable of using numerical quantities in at least three different ways. Firstly, they can learn to recognise the number of objects that are present in a particular place. For example, they can learn to associate three objects with the presence of nectar, regardless of the shape, size, or colour of the objects. Secondly, they can be trained to find their way around a simple maze where they have to learn to take, for example, the third turning on the left. They can learn to do this even if the third turning is in different places. These are two different ways in which they can work with 'threeness': three things separated spatially or three things separated temporally. Bees can use oneness, twoness, threeness, fourness, fiveness, but things start to go wobbly there. Arguably they can use zeroness. Thirdly, they can use their waggle dance to communicate an approximate distance and direction. This is innate, inherited behaviour, and hence inflexible.

    Next, some quotes from What Babies Know, Elizabeth S. Spelke.

    OBJECTS
    ... the movable bodies that we see, grasp, and act on. Before infants can reach for and manipulate objects, they organize perceptual arrays into bodies that are cohesive, bounded, solid, persisting, and movable on contact. Young infants use these abstract, interconnected properties to detect the boundaries of each object in a scene, to track objects over occlusion, and to infer their interactions with other objects.

    PLACE
    The core place system underlies our sense of where we are, where other things are, and what paths will take us from one place to another. Studies of animals and young children reveal that navigation depends, first and foremost, on representations of abstract geometric properties of the ground surface over which we travel: the distances and directions of its boundaries, ridges, cliffs, and crevices.

    NUMBER
    Research on human infants, children, adults in diverse cultures, and nonhuman animals all converges on evidence for an early-emerging ability to represent and combine numerical magnitudes with approximate, ratio-limited precision. This ability depends on a core system with most of the properties of the core object and place systems: it is present in newborn infants and functions throughout life, and it is ancient, unitary, and limited in the types of information it provides.

    One might ask at this point what it is that we've got that bees haven't. Perhaps they can't combine numbers. I don't think they have fully abstracted numbers from their environment. They can use threeness as a property in two different ways, but can they unify these notions of threeness? Could they be trained to take the nth turning after having seen n objects (for n <= 5)? That would be another step towards abstraction.

    My own feeling is that for an agent to achieve full abstraction from its environment it needs to find some part of that environment where it can exert intricate control. A good way is making sequences of marks (or making rows of 'bodies that are cohesive, bounded, solid, persisting, and movable on contact'), and then looking at them. I think bees could make marks in wax and look at them easily enough, but I guess their environment does not give them sufficient motivation to do so.

    Marks are made one after another in time in the sequence, but once made they are spatially separated. This helps unify notions of 'n-ness'. They persist in time, so extend memory capabilities. Sequences of marks can be created and modified by the agent, and by modeling this behaviour internally, the agent can make another step towards abstraction. The agent can start to predict what would happen if marks were modified this way or that. I would say that once an agent starts this sort of imagining, it has started thinking mathematically.
  • Infinity
    However, if a "mathematical antirealist" believes that math is invented and these concepts exist only in human minds, then one must accept that the conception of "2" varies depending on the circumstance, or use. This is very evident from the multitude of different number systems. So for example, when a person uses, "2" it might refer to a group two things, or it might refer to the second in a series, or order. These are two very distinct conceptions referred to by "2". So, since "2" has at least two referents, it cannot refer to a single object. We could however propose a third referent, an object named "2", but what would be the point in that? The object would be something completely distinct from normal usage of the symbol.Metaphysician Undercover

    ??

    Of course there are many conceptions of "2". I don't know what you mean by objects, why you're talking about objects, or what point you are attempting to make. I don't know what you mean by the normal usage of "2".
  • Infinity
    For a mathematical antirealist, does any of this constitute hypocrisy?

    I can't see the relevance. Your game clearly involves real objects, pebbles, or in the case of your presentation, the letters. Would the antirealist insist that these are not real objects?
    Metaphysician Undercover
    Earlier you said (for example):
    In set theory it is stated that the elements of a set are objects, and "mathematical realism" is concerned with whether or not the things said to be "objects" in set theory are, or are not, objects.
    and
    However, it's hypocrisy to say "I'm a mathematical antirealist" and then go ahead and use set theory.

    By a 'mathematical antirealist' I meant someone who thinks maths is invented, not discovered. Or someone who thinks that your "objects" in set theory only exist in our minds, or as pebbles or ink or pixels, etc.

    The whole of number theory or set theory can be reduced to a game with pebbles like the one I described. More colours of pebbles, more rules, but just rows of pebbles and precisely defined ways of rearranging them. It is thus possible to do number theory or set theory without mentioning numbers, or sets, or any other mathematical objects, or using a natural language at all. Tricky, but possible.

    You can interpret some patterns of pebbles as objects of various sorts, but treat them as mental crutches, vague hand-wavy ideas, expressed in natural language with all its confusions and ambiguities, which can guide your intuition. Or you can believe they really exist somewhere. Either way, I don't see any hypocrisy.

    I get the feeling you have no experience working with formal systems, and have no real understanding of metamathematics. I can't explain your inability to see the relevance of my game otherwise.
  • Infinity
    I've invented a game. At least I think I invented it. I believe that mathematics is invented rather than discovered, and it is kind of a mathematical game. You can play it with black and white pebbles like you might use for the game Go. It's a solitaire game, though, with no particular aim.

    You put the pebbles in rows, from left to right. I'll use B and W to represent the pebbles, but it's nicest to play with natural concrete instantiated objects. There are two rules.

    Rule 1. You can make a row by putting two pebbles down like this:
    BW
    

    Rule 2. If you have made a row, or some rows, of pebbles, you can join them all together into one long row, and then put an extra B at the beginning and an extra W at the end.

    Let's see some patterns we can make. Using rule 1 we have
    BW
    
    We could use rule 1 again.
    BW
    
    BW
    
    This is boring. Let's try rule 2. We could make
    BBWW
    
    or
    BBWBWW
    
    If we took
    BW
    BBWW
    
    we could make
    BBWBBWWW
    
    If we took
    BW
    BBWW
    BBWBBWWW
    
    we could make
    BBWBBWWBBWBBWWWW
    

    It is possible to interpret these rows of pebbles as multisets. It is possible to interpret some rows as sets. It is possible to interpret some rows as natural numbers. It is possible to interpret the sequence
    BW, BBWW, BBWBBWWW, BBWBBWWBBWBBWWWW
    
    as counting. It's a pretty cumbersome way of counting. It would be easier to ignore the colours of the pebbles, and just count the pebbles, and interpret the counts as numbers. It is possible to ignore all these interpretations, and just play the game.
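    If you'd rather let a computer move the pebbles, the two rules fit in a few lines of Python, with strings standing in for rows (this is just my game transcribed, nothing more):

    def rule1():
        """Make the basic row."""
        return "BW"

    def rule2(rows):
        """Join some existing rows into one long row, then wrap with B ... W."""
        return "B" + "".join(rows) + "W"

    r1 = rule1()              # BW
    r2 = rule2([r1])          # BBWW
    r3 = rule2([r1, r2])      # BBWBBWWW
    r4 = rule2([r1, r2, r3])  # BBWBBWWBBWBBWWWW
    print(r1, r2, r3, r4)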

    For a mathematical antirealist, does any of this constitute hypocrisy?

    (@Metaphysician Undercover mostly.)
  • Unperceived Existence
    Personally, I'd be inclined to answer in terms of psychology, based on Elizabeth Spelke's book What Babies Know.

    Chapter 2 focuses on studies of infants' knowledge of objects: the movable bodies that we see, grasp, and act on. Before infants can reach for and manipulate objects, they organize perceptual arrays into bodies that are cohesive, bounded, solid, persisting, and movable on contact. Young infants use these abstract, interconnected properties to detect the boundaries of each object in a scene, to track objects over occlusion, and to infer their interactions with other objects. Nevertheless, there are striking limits to young infants' object representations: Infants have little ability to track hidden objects by their shapes, colors, or textures, although they do detect and remember these properties.

    Above all, research reveals that infants' early-emerging representations of objects are the product of a single cognitive system that operates as an integrated whole. This system emerges early in development, it remains present and functional in children and adults, and it guides infants' learning. The system combines some, but not all, of the properties of mature perceptual systems and belief systems, and it therefore appears to occupy a middle ground between our immediate perceptual experiences on the one hand and our explicit reasoning on the other. Research probing infants' expectations about objects suggests hypotheses concerning the mechanisms by which a system of knowledge might emerge, function, and guide infants' learning about the kinds of objects their environment provides and the kinds of events that occur when different objects interact. Research described in this chapter also reveals that infants' knowledge of objects is at least partly innate. It suggests how innate knowledge of objects might arise prior to birth, preparing infants for their first perceptual encounters with movable, solid, inanimate bodies.
  • Infinity
    Some here might like finitism or ultrafinitism. Wikipedia has a page, and there's a more technical intro here: nlab. The following is about an extreme ultrafinitist.

    I have seen some ultrafinitists go so far as to challenge the existence of 2^100 as a natural number, in the sense of there being a series of “points” of that length. There is the obvious “draw the line” objection, asking where in 2^1, 2^2, 2^3, …, 2^100 do we stop having “Platonistic reality”? Here this … is totally innocent, in that it can be easily be replaced by 100 items (names) separated by commas. I raised just this objection with the (extreme) ultrafinitist Yessenin-Volpin during a lecture of his. He asked me to be more specific. I then proceeded to start with 2^1 and asked him whether this is “real” or something to that effect. He virtually immediately said yes. Then I asked about 2^2, and he again said yes, but with a perceptible delay. Then 2^3, and yes, but with more delay. This continued for a couple of more times, till it was obvious how he was handling this objection. Sure, he was prepared to always answer yes, but he was going to take 2^100 times as long to answer yes to 2^100 than he would to answering 2^1. There is no way that I could get very far with this. — Harvey Friedman, Philosophical Problems in Logic
  • Spontaneous Creation Problems


    You might like Max Tegmark's idea that "All possible mathematical structures have a physical existence, and collectively, give a multiverse that subsumes all others."
    (https://en.wikipedia.org/wiki/Our_Mathematical_Universe)

    Or Stephen Wolfram's Ruliad : "Think of it as the entangled limit of everything that is computationally possible: the result of following all possible computational rules in all possible ways."
    (https://writings.stephenwolfram.com/2021/11/the-concept-of-the-ruliad/)
  • What is a strong argument against the concievability of philosophical zombies?


    From https://en.wikipedia.org/wiki/Philosophical_zombie,
    According to Chalmers, one can coherently conceive of an entire zombie world, a world physically indistinguishable from this one but entirely lacking conscious experience. Since such a world is conceivable, Chalmers claims, it is metaphysically possible, which is all the argument requires. Chalmers writes: "Zombies are probably not naturally possible: they probably cannot exist in our world, with its laws of nature."

    This seems to me to be the `real' zombie argument, about another world, or another universe. (I don't like Chalmers's use of 'laws', nor do I like Carroll's use of 'stuff', nor your use of both. :smile: )

    But I wasn't sure how his preferred 'weak emergence' would be real phenomenality as he indicates, as he seemed to switch to talking about levels of explanation. — Banno

    I tend to agree.
  • What is a strong argument against the concievability of philosophical zombies?


    No, I do not mean physicalism. I'm saying that all behaviour, including language, can be predicted from physics. That is compatible with physicalism, but it is not physicalism. I'll recommend Sean Carroll again: section Passive Mentalism and Zombies in his essay Consciousness and the Laws of Physics at https://philarchive.org/rec/CARCAT-33 .
  • What is a strong argument against the concievability of philosophical zombies?


    '...physics' was short for physics, chemistry, abiogenesis, biology, evolution, and so on. There are scientific theories of how language developed in hominids. Perhaps we don't have the right one yet, but I'm sure one exists.
  • What is a strong argument against the concievability of philosophical zombies?


    I wouldn't put it like that. I see it as a thought experiment which can clarify how much science someone accepts. It hasn't worked with @Patterner yet. @Wayfarer seems dubious about the science.

    Usually, physicalists don't accept p-zombies whereas others do. Usually the arguments go the way Sean Carroll describes in section Passive Mentalism and Zombies in his essay Consciousness and the Laws of Physics at https://philarchive.org/rec/CARCAT-33 . This essay was a reply to the panpsychist Philip Goff.
  • What is a strong argument against the concievability of philosophical zombies?
    I’m leaning toward panpsychism. But even if it’s not that, something else is happening. And without that something else, why would a thing that looks like us, and has all the physical we have, act as though it has that something else? Why would it say the things it would have to say to make us think it was conscious if it was not?Patterner

    Do you believe the 'something else' affects behaviour in a way that disagrees with predictions from physics? If so, why haven't scientists noticed any discrepancies?

    If not, the p-zombie would 'say the things it would have to say to make us think it was conscious' because ... physics. It would cry and laugh and complain about pain just like we do, and our first impression would be that it must be lying, pretending, acting. But no. We would be misinterpreting everything it did and said. Things wouldn't mean the same inside to the p-zombie.

    By the way, I think it is better to try to conceive of a whole separate universe of p-zombies, instead of one walking among us. I also think it is better not to consider an exact copy: that leads to unnecessary distractions and confusions. So try to conceive of a universe with exactly the same physical laws as ours, and similar enough to have an Earth with humans like us on it, including scientists and philosophers. However, it is an Earth peopled with strangers, forging its own future. Must this universe contain your 'something else'?
  • Meaning, Happiness and Pleasure: How Do These Ideas Differ As Philosophical Ends?
    Here are some definitions inspired by reinforcement learning (an approach to AI). Pleasure is the reward that you receive from time to time from the environment. Happiness is your estimate of the total amount of pleasure you will receive in the future. Your rationality is your ability to make good estimates of your happiness.

    I presume some philosophers have similar notions.
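    For concreteness, here are those definitions in a few lines of Python, with made-up numbers (gamma is the discount factor from reinforcement learning):

    gamma = 0.95                    # how steeply you discount the future
    rewards = [0.0, 1.0, 0.0, 2.0]  # hypothetical pleasures yet to come

    # happiness = estimated total (discounted) future pleasure
    happiness = sum(gamma**t * r for t, r in enumerate(rewards))
    print(round(happiness, 2))  # 0 + 0.95 + 0 + 2*0.95**3 = 2.66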
  • Poll: Evolution of consciousness by natural selection
    One is a case of weak emergence, or simply different levels of description, and the other is a case, if of emergence, of strong emergence, which is much harder to justify.petrichor

    Scientists like Sean Carroll believe that consciousness is weakly emergent, and you only seem to have an argument from incredulity against them.
    https://philsci-archive.pitt.edu/19311/1/Consciousness%20and%20Laws%20of%20Physics-full.pdf
  • Poll: Evolution of consciousness by natural selection
    I have often gotten the impression, which is maybe mistaken, that many in the scientific community basically take this position, that consciousness is real, that everything that happens in the brain is fully accounted for by low-level pre-conscious physical causes (and therefore epiphenomenalism must be true), and yet that consciousness evolved by natural selection. This has always seemed to me to be a problematic combination of incompatible beliefs. It makes me suspect that people haven't thought it all through sufficiently. But maybe I am missing something. Maybe, for one thing, they just don't even have in mind the same thing I do when talking about consciousness.petrichor

    A couple of things you may be missing. First, evolution is more than natural selection. A neutral trait may go to fixation in a population by genetic drift. If you say that consciousness has no effect on behaviour, it must be selectively neutral.

    Second, and I suspect this is the real issue, are emergent properties (https://plato.stanford.edu/entries/properties-emergent/) and your use of `cause'. You can say that fluid dynamics caused a tornado, and that a tornado caused some damage. Or you could say the fluid dynamics caused the damage. People won't mind if you're talking about tornados. I think that many of the scientists you're criticising would say that consciousness is emergent like a tornado.
  • Evolutionary Psychology- What are people's views on it?
    Alleles (variants of DNA sequences) can go to fixation (every individual in a population gets the same allele) in various ways.

    1. Genetic drift. This is most important in small populations. Genetic drift can overcome selection if the selection coefficient s is less than 1/N, where N is the effective population size. For humans over the past 200,000 years or so, N has been estimated as around 10,000. In very crude terms, this means that if a bad allele kills fewer than 1 in 10,000 it can go to fixation despite being deleterious. We don't know what N was for human ancestors in earlier times.

    2. Hitch-hiking genes. Selection acts on a gene (with a relatively large positive s), and drags along a nearby gene (which has a smaller but negative s) to fixation.

    3. Pleiotropy. Genes often have multiple functions. It may be that selection in favor of an allele for one function impairs another function.

    4. Natural selection.

    A lot of people don't seem to know about anything except 4. @Srap Tasmaner did mention genetic drift, but does not seem to understand what it can do. The important thing is that 1, 2, and 3 can all result in an entire population acquiring a trait which is deleterious. It is a terrible mistake to think that every trait possessed by all individuals in a population must be there because it is or was beneficial.
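    To make the drift-versus-selection point concrete, here is a minimal haploid Wright-Fisher simulation in Python (the numbers are chosen so it runs quickly, not to match the human estimates above):

    import numpy as np

    rng = np.random.default_rng(0)

    def wright_fisher(N=1000, s=-0.0005, p0=0.5, max_gens=20_000):
        """Track one allele under selection coefficient s plus drift.
        Returns 1.0 if the allele fixes, 0.0 if it is lost."""
        p = p0
        for _ in range(max_gens):
            p = p * (1 + s) / (p * (1 + s) + (1 - p))  # selection
            p = rng.binomial(N, p) / N                 # drift (sampling)
            if p in (0.0, 1.0):
                break
        return p

    # Here |s| = 1/(2N): the allele is mildly deleterious, yet it still
    # fixes in a substantial fraction of runs. Make |s| much bigger than
    # 1/N and it almost never does.
    runs = [wright_fisher() for _ in range(200)]
    print(sum(r == 1.0 for r in runs), "fixations out of", len(runs))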

    An example involves vitamin C. Humans cannot make vitamin C, so if we don't get enough from our diet, we get ill. Our close primate relatives have an enzyme which does make vitamin C, and you can find the region in our DNA where our gene for this enzyme used to be. Somehow (probably 1, 2, or 3) it got broken. There are typically many mutations which can stop a gene working, but only a few (perhaps only the exact reverse of the one that caused the damage) that can repair it. So once every copy of the gene in the gene pool is broken, it can stay that way for ages, acquiring more damage by drift.

    There is in principle no difficulty answering Srap Tasmaner's argument in relation to 'procreative genes'. If cultural transmission made them only mildly advantageous, they could go the same way as the vitamin C enzyme.

    I do not think this has happened. I do not think cultural transmission is reliable or powerful enough to explain what we see. For example, cultures in different societies and periods vary widely in their attitude towards homosexuality, but the percentages of people with various sexual orientations do not. If sexual orientation is purely determined by culture, why do homosexuals continue to exist in very homophobic cultures? Why don't societies occasionally become 'very gay', with a large percentage of exclusive homosexuals?
  • A potential solution to the hard problem


    Thanks. I was expecting a philosophical not a biological answer (eg a definition of what memory means to some philosophers). I knew about the enteric nervous system (though I'd forgotten the name). If it records some information, and later uses that information to make a decision, I would call that memory, or even a 'mental record'. I don't see the point of restricting to the central nervous system when discussing the mind from a philosophical point of view.

    BTW, I think the immune system is a better example of information processing outside the CNS. It has a very large and long-term memory.
  • A potential solution to the hard problem
    It means retrieving the information from memory. Mind you, bodily functions such as hunger is not memory based, nor the bowel movement ( I will explain it for those uninitiated, upon request).L'éléphant

    Yes please.
  • What is computation? Does computation = causation
    Thanks. Perhaps I'm not fully understanding your point, but does this actually reduce the number of computations required or just the length of the algorithm needed to describe the transition from T1 to Tn?Count Timothy von Icarus

    It might reduce or increase the number of computations required - that would depend on many details. Perhaps it doesn't matter to you that the computation doesn't go through time in small steps.

    One other thought: you might find the idea of functional information interesting. Eg https://www.nature.com/articles/423689a . Perhaps it is possible to come up with a notion of 'functional information processing' which would distinguish between arbitrary information processing (which you might call causation) and 'meaningful' information processing (which you might call computation).
  • What is computation? Does computation = causation
    Even if we model the demon as a Markov chain, it is still passing through these many states. And here is the crux of my argument, a full description of each of the states the demon passes through to evolve the system from time T to time T' would require more information than is used to describe either T or T' alone. If you say, "not true, T3 tells you all about T4 and T5," my response would be, "if that is the case, show me T5 without passing through any more states." If T is truly equivalent to T', it shouldn't be discernible from it. If it is discernible, then difference exists (Leibnitz Law), and so to new does information.Count Timothy von Icarus

    Mathematician here. I think you're getting into trouble (in an interesting way). If the model is a discrete time Markov chain determined by a matrix P of transition probabilities, with state distributions v0, v1, ... at times T0, T1, ..., then you can calculate v1, v2, ..., vn step by step, using v1 = P v0, v2 = P v1, etc. But you can also square P repeatedly, to get a high power of P, and go straight from v0 to vn. There is a lot of pre-computation, but once it's done you can fast-forward to states far in the future.
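    In numpy, with a made-up two-state chain, the trick looks like this (ten squarings replace 1024 single steps):

    import numpy as np

    # columns sum to 1, matching the convention v1 = P v0 above
    P = np.array([[0.9, 0.2],
                  [0.1, 0.8]])
    v0 = np.array([1.0, 0.0])

    # step by step: 1024 matrix-vector products
    v = v0
    for _ in range(1024):
        v = P @ v

    # fast-forward: square P ten times (2**10 = 1024), then one product
    Q = P
    for _ in range(10):
        Q = Q @ Q

    print(np.allclose(v, Q @ v0))  # True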

    That is, we cannot ignore the process of evolution, as is often done. Computation creates discernible differences across a time dimension, such that if we had a second Le Place's demon producing outputs about every state the first demon passes through, the output would be many times larger than the first's when it simply describes T' based on T.Count Timothy von Icarus

    Well, you can't ignore the process of evolution completely, but you can skip large chunks of time. Not sure where this leaves your point 2.

    (Some time ago I was thinking about Tononi's integrated information theory, and wondering if fast-forwarding would destroy consciousness. I don't want to get into the hard problem here.)
  • The Hard Problem of Consciousness & the Fundamental Abstraction


    But experience is subjective. Natural selection can only act on morphology and behaviour. ("Natural selection can hear you scream but it cannot feel your pain").
  • The Hard Problem of Consciousness & the Fundamental Abstraction
    1. Why are physical processes ever accompanied by experience?
    [...]
    The answer for the first question is Survival advantage (Evolutionary Principles)
    Nickolasgaspar

    How can natural selection act on experience?