• Harry Hindu
    5.1k
    However, as far as my argument is concerned, "thinking" means everything that occurs in the brain, whether our consciousness is aware of it or not.TheMadFool

    Then the mind "thinking" how to catch a ball is the same as the brain "performing" mathematical calculations?
  • TheMadFool
    13.8k
    Then the mind "thinking" how to catch a ball is the same as the brain "performing" mathematical calculations?Harry Hindu

    To begin with, I want to avoid discussions of the mind if meant as some form of immaterial object different from the brain. I will use "mind" here to refer only to the brain and what it does.

    Nevertheless, the complexity of the brain necessitates a distinction - that between higher consciousness and lower consciousness/the subconscious. The former refers to that part of the mind that can make something an object of thought. What do I mean by that? Simply that higher consciousness can make something an object of analysis or rational study or even just entertain a simple thought on it. Our higher consciousness is active in this discussion between us, for example.

    By lower consciousness/the subconsciousness I mean that part of the mind that doesn't or possibly can't make things an object of rational inquiry or analysis. While sometimes this part of the mind can be accessed by the higher consciousness, the subconscious is for the most part hidden from view. As an example, our typing the text of this discussion is the work of the lower consciousness/the subconscious. The higher consciousness doesn't decide which muscles to contract and which to relax and calculate the force and direction of my fingers in typing our text. Rather the lower consciousness/the subconscious carries out this activity.

    So, what you mean by performance is being carried out by the lower consciousness/the subconscious, and this friendly exchange of ideas between us is the work of our higher consciousness. The difference in opinion we have is that for me both higher and lower consciousness are thinking, but for you thinking seems to apply only to higher consciousness.
  • Cabbage Farmer
    301
    When an object is thrown at me, and I hope I'm representative of the average human, I make an estimate of the trajectory of the object and its velocity and move my body and arm accordingly to catch that object. All this mental processing occurs without resorting to actual mathematical calculations of the relevant parameters that have a direct bearing on my success in catching thrown objects.TheMadFool
    I would not say I ordinarily estimate the trajectory and velocity. I look and catch, look and throw.

    Something in me does estimate the trajectory and velocity when I look and catch and throw. I expect you are right to suggest that this process is largely subconscious and in a sense involuntary or automatic, once I set myself to a game of catch. But the process manifests in my awareness as a feature of the cooperation of my perception, motion, expectation, and intention.

    I expect you are right to suggest that this process of anticipation in us does not ordinarily involve mathematical calculation.

    The other possibility is that we don't need mathematics to catch a ball and roboticists are barking up the wrong tree. Roboticists need to rethink their approach to the subject in a fundamental way. This seems, prima facie, like telling a philosopher that logic is no good. Preposterous! However, to deny this possibility is to ignore a very basic fact - humans don't do mathematical calculations when we play throw and catch, at least not consciously.TheMadFool
    Isn't it the job of the robot designers to design robots that perform certain actions, like drilling or catching? What does it matter whether the processes involved are the same as the processes in us? How could they be exactly the same sort of processes?

    It's not clear to me what claim you are objecting to.
  • Harry Hindu
    5.1k
    So, what you mean by performance is being carried out by the lower consciousness/the subconscious, and this friendly exchange of ideas between us is the work of our higher consciousness. The difference in opinion we have is that for me both higher and lower consciousness are thinking, but for you thinking seems to apply only to higher consciousness.TheMadFool

    No, I'm trying to clarify what you are really asking in your OP. I asked (I wasn't asserting anything) whether the brain and mind are doing the same thing and we are just using different terms to refer to the same thing - thinking and performing - terms that have to do with different vantage points: from within your own brain (your mind thinking) or from outside of it (someone else looking at your mind and seeing a brain performing mathematical calculations).

    The higher consciousness doesn't decide which muscles to contract and which to relax and calculate the force and direction of my fingers in typing our text. Rather the lower consciousness/the subconscious carries out this activity.TheMadFool
    When you are learning how to do these things for the first time, you are applying your "higher" consciousness. For instance, learning to ride your bike requires conscious effort. After you have enough practice, you can do it without focusing your consciousness on it.

    So, if the higher level passes the work down to the lower level, what exactly is it passing down - mathematical calculations? Thoughts? What? At what point does the brain pass it down to the lower level - what tells the brain, "Okay, the lower level can take over now"? How does the brain make that distinction?

    Nevertheless, the complexity of the brain necessitates a distinction - that between higher consciousness and lower consciousness/the subconscious. The former refers to that part of the mind that can make something an object of thought. What do I mean by that? Simply that higher consciousness can make something an object of analysis or rational study or even just entertain a simple thought on it. Our higher consciousness is active in this discussion between us, for example.TheMadFool
    So, is the subconscious just an object of thought in the higher level, or is it really a "material" object in the world independent of it being objectified by the higher level? You seem to be saying that brains and the subconscious are objects before being objectified by the higher level. If they are already objects in the material world, then why does the higher level of the brain need to objectify those things? What would it mean for the "higher" level to objectify what is already an object?
  • TheMadFool
    13.8k
    No, I'm trying to clarify what you are really asking in your OP. I asked (I wasn't asserting anything) whether the brain and mind are doing the same thing and we are just using different terms to refer to the same thing - thinking and performing - terms that have to do with different vantage points: from within your own brain (your mind thinking) or from outside of it (someone else looking at your mind and seeing a brain performing mathematical calculations).Harry Hindu

    Either your English is too good or my English is too bad :rofl: because I can't see the relevance of the above to my position. Either you need to dumb it down for me or I have to take English classes. I'm unsure which of the two is easier.

    When you are learning how to do these things for the first time, you are applying your "higher" consciousness. For instance, learning to ride your bike requires conscious effort. After you have enough practice, you can do it without focusing your consciousness on it.Harry Hindu

    You're in the ballpark on this one. The only issue I have is I don't see the involvement of higher consciousness in learning to ride a bike in the sense that your consciousness is directly involved in deciding which muscles to contract and which to relax and how much force each muscle should exert.

    If you ask me, though physical ability is too generic in the sense that it basically involves control of a few basic body structures like the head, neck, limbs and torso, the process of learning a skill that isn't part of "normal" physical activity, e.g. riding a bike or juggling, appears to be a bottom-up process; in other words, what is actually happening is that the subconscious is feeding the higher consciousness cues as to how your limbs and torso must move in order to ride a bike.

    It isn't the case that the higher consciousness is passing on information to the subconscious in learning to ride a bike; on the contrary, the subconscious is adjusting itself to the dynamics of the bike. I've stopped riding bikes, but I remember making jerky, automatic (i.e. not conscious and voluntary) movements with my arms, legs and torso. The bottom line: the role of the higher consciousness begins and ends with the desire to ride a bike.

    So, is the subconscious just an object of thought in the higher level, or is it really a "material" object in the world independent of it being objectified by the higher level? You seem to be saying that brains and the subconscious are objects before being objectified by the higher level. If they are already objects in the material world, then why does the higher level of the brain need to objectify those things? What would it mean for the "higher" level to objectify what is already an object?Harry Hindu

    Forget that I said that. I wanted to make a distinction between the part of the "mind" (brain function) that generates intentions with respect to our bodies and the part of the mind that carries out those intentions. As an example, as I write these words, I'm intending to do so and that part of the mind is not the same as the part of the mind that actually moves my fingers on the keyboard. For me what both these parts do amounts to thinking.

    You said that I was conflating thinking with performing, but that would imply the former doesn't involve the latter. As I explained above, I consider both the intent to do a physical act and the performance of that act to be thinking.
  • Harry Hindu
    5.1k
    You're in the ballpark on this one. The only issue I have is I don't see the involvement of higher consciousness in learning to ride a bike in the sense that your consciousness is directly involved in deciding which muscles to contract and which to relax and how much force each muscle should exert.TheMadFool

    How did you learn to ride a bike? What type of thoughts were involved? Didn't you have to focus on your balance, which involves fine control over certain muscles that you might or might not have used before? What about ice-skating, which uses muscles (in your ankles) that most people who have never ice-skated haven't used before?

    The reason you say that this "physical" act is done by some other part of the brain is because you've already passed the performance to another part of the brain. Using your theory, learning to ride a bike would be no different than knowing how to ride a bike. Learning requires the conscious effort of controlling the body to perform functions the body hasn't performed before. Once you've learned it, it seems to no longer require conscious effort to control the body. Practice creates habits. Habits are performed subconsciously.

    I think part of your problem is this use of terms like "material" vs. "immaterial" and "physical" vs. "mind". You might have noticed that I haven't used those terms except to try to understand your use of them, which is incoherent.

    What is "intent" at the neurological level? Where is "intent" in the brain?
  • Harry Hindu
    5.1k
    No, I'm trying to clarify what you are really asking in your OP. I asked (I wasn't asserting anything) whether the brain and mind are doing the same thing and we are just using different terms to refer to the same thing - thinking and performing - terms that have to do with different vantage points: from within your own brain (your mind thinking) or from outside of it (someone else looking at your mind and seeing a brain performing mathematical calculations).
    — Harry Hindu

    Either your English is too good or my English is too bad :rofl: because I can't see the relevance of the above to my position. Either you need to dumb it down for me or I have to take English classes. I'm unsure which of the two is easier.
    TheMadFool
    Why is it that when I look at you, I see a body with a brain, not your mind? If I wanted to find evidence of your mind, or your intent, where would I look? Would I see what you see? If I see a brain causing the body to perform actions, and you experience intent causing your body to perform actions, why the difference?

    Forget that I said that. I wanted to make a distinction between the part of the "mind" (brain function) that generates intentions with respect to our bodies and the part of the mind that carries out those intentions.TheMadFool
    Are you saying that your intent moves your brain into action? How is that done? Forgetting you said that is forgetting how your position is incoherent.
  • TheMadFool
    13.8k
    How did you learn to ride a bike? What type of thoughts were involved? Didn't you have to focus on your balance, which involves fine control over certain muscles that you might or might not have used before? What about ice-skating, which uses muscles (in your ankles) that most people who have never ice-skated haven't used before?

    The reason you say that this "physical" act is done by some other part of the brain is because you've already passed the performance to another part of the brain. Using your theory, learning to ride a bike would be no different than knowing how to ride a bike. Learning requires the conscious effort of controlling the body to perform functions the body hasn't performed before. Once you've learned it, it seems to no longer require conscious effort to control the body. Practice creates habits. Habits are performed subconsciously.

    I think part of your problem is this use of terms like "material" vs. "immaterial" and "physical" vs. "mind". You might have noticed that I haven't used those terms except to try to understand your use of them, which is incoherent.

    What is "intent" at the neurological level? Where is "intent" in the brain?
    Harry Hindu

    Headless Chicken

    [Image: Mike the Headless Chicken]

    The part of our mind that generates intent isn't necessary for carrying out physical activities. If the chicken can walk without a head then surely learning to ride a bike, which is nothing more than glorified walking, can be done without the intent-generating part of the brain, completely at the level of the subconscious.

    Thanks for the information VagabondSpectre.
  • Harry Hindu
    5.1k
    The part of our mind that generates intent isn't necessary for carrying out physical activities. If the chicken can walk without a head then surely learning to ride a bike, which is nothing more than glorified walking, can be done without the intent-generating part of the brain, completely at the level of the subconscious.TheMadFool
    You're avoiding the questions and this post doesn't address any of the points I have made.

    All you have done is provide an explanation as to why the part of our mind that generates intent isn't necessary. Then why does it exist? What is "intent" for? You're the one that proposed this "intent" in our minds and now you're saying it isn't necessary.

    I wonder, if a child lost its head before learning how to walk, whether the child would be able to walk after losing its head. Why or why not?
  • VagabondSpectre
    1.9k
    I wonder, if a child lost its head before learning how to walk, whether the child would be able to walk after losing its head. Why or why not?Harry Hindu


    For humans the answer is almost definitely no, simply because we're super complicated. When children first learn to stand up, balance, and walk, they're doing it with conscious effort, and over time they get better and better at it. We have more fine-control requirements than something like a chicken, which has a much lower center of gravity (and therefore more balance-plausible positions), proportionally bigger feet, and fewer muscles. I know that in adults balance can be handled subconsciously by central pattern generators. Look at a time-lapse video of someone standing "still": they actually sway back and forth constantly, and pattern generators keep correcting the skeletal muscles (like those in the hips and back) that directly affect balance. For example, if your balance perception or your force/position-sensing afferent neurons indicate a problem developing in a given direction, a reflex can issue a correcting force until the problem is marginalized.
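    To make that loop concrete, here is a toy sketch in Python; the gains, limits, and function names are invented for illustration and are not taken from any real physiological model:

        # Toy sketch of a reflex-style balance correction (illustrative numbers only).
        def balance_reflex(tilt_deg, tilt_rate_deg_s, kp=8.0, kd=2.0, max_torque=40.0):
            """Issue a corrective hip/back torque opposing the sensed sway.

            tilt_deg:        perceived lean from upright (the 'balance' signal)
            tilt_rate_deg_s: how fast the lean is changing (afferent rate signal)
            """
            correction = -(kp * tilt_deg + kd * tilt_rate_deg_s)  # push back against the sway
            return max(-max_torque, min(max_torque, correction))  # muscles have limited force

        # Example: leaning 3 degrees forward and still tipping forward at 1 deg/s
        print(balance_reflex(3.0, 1.0))  # negative value, i.e. a backward-correcting torque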

    For chickens, there is still early learning that must take place. The 'pecking' motion that some birds (but especially baby chickens) do is actually an automatic hard-wired reflex that gets triggered by certain stimuli (like looking down with the right body position and seeing something shiny). However, the chick doesn't know what it's doing at first, or why; it just thrusts its beak randomly toward the ground. And once it manages to snatch something tasty (like a bug or grain), it can start to optimize the pecking motion to get more rewards.

    The nursing/suckling motion performed by the jaw muscles of human babies is another example. This is definitely a central pattern generator circuit being activated automatically (just by the touch receptors near the lips), but initially the baby has no clue what it is doing and cannot nurse effectively. It learns to adapt and modify the movements quite quickly (breast milk must be the bee's knees for a newborn), and this is what enables it to quickly learn the somewhat complicated and refined motor control for nursing. The same pattern generators probably get us babbling later on, and later still get used for speaking.

    Likewise with baby deer: they can stand up almost instantaneously, but not actually instantaneously. They still have to get the hang of it with some brief practice.

    The hard wired starting patterns we have from birth are like very crude ball-park/approximate initial settings that basically get us started in useful directions.

    CONTENT WARNING: Disturbing

    This is a video of a decerebrated cat (its cerebrum has been disconnected from its body, or damaged to the point of non-activity).



    It is being suspended over a treadmill so that its legs are touching the substrate. As can be seen in the video, without any central nervous modulation whatsoever, the still-living peripheral nervous system can still trigger and modulate the central pattern generators in the spine to display a range of gaits.

    Whether or not this would work with a kitten is the experiment needed to start estimating how much learning takes place in the spine and brain-stem itself throughout the lifespan of the animal: is it the higher brain that improves its control over the body, or do the peripheral body systems adapt over time to better suit the ongoing needs of the CNS (or what combination of both)? Increasing the default coupling strength or activation thresholds of different neurons in these central pattern generator circuits can effectively do the latter. How much motor control refinement takes place, and where it takes place, certainly differs between very different species of animal (insects, fish, quadrupeds, bipeds, etc.).

    In summary, different bodies have different demands and constraints when it comes to motion control strategies. Centipedes don't need to worry too much about balance, but the timing between their legs must remain somewhat constant for most of their actions. Spiders worry a lot about balance, and they have even more rigid timing constraints between their leg movements (if they want to move elegantly). These kinds of systems may benefit from more rigidly wired central pattern generators (lacking spines, they still do have CPGs). Evolution could happily hard-wire these to a greater extent, thereby making it easier for tiny insect brains to actually do elegant high-level control. At a certain point of complexity, like with humans, evolution can't really risk hard-wiring everything, and so the CPGs (and the central nervous system controlling them) may take much longer to be optimized, but as a result are more adaptive (humans can learn and perfect a huge diversity of motor control feats). Four-legged animals have a much easier time balancing, and their basic quadrupedal gaits are somewhat common to all four-legged critters, meaning it's less risky to give them a more rigid CPG system.
  • Harry Hindu
    5.1k
    You wasted your time. None of this addresses the questions I asked TMF.
  • VagabondSpectre
    1.9k
    You wasted your time. None of this addresses the questions I asked TMF.Harry Hindu

    Oh... I thought you asked if a decerebrated infant can walk before learning to walk. The answer is no, unless little or no learning is required for walking in the first place (this may actually be the case for some insects to get going). I have also explained why: the loose neural circuitry we come hard-wired with only solves the problem part way, and conscious learning is very often required to refine a given action or action sequence (refined via the central nervous system responding to intrinsic rewards).

    If you take my answer to your sarcastic question seriously, then it also answers some questions about "intent", which I won't hazard to define. If you're not interested that's alright, but I'm sure many others are, so please forgive my use of your post as a springboard for my own.
  • TheMadFool
    13.8k
    You're avoiding the questions and this post doesn't address any of the points I have made.

    All you have done is provide an explanation as to why the part of our mind that generates intent isn't necessary. Then why does it exist? What is "intent" for? You're the one that proposed this "intent" in our minds and now you're saying it isn't necessary.

    I wonder, if a child lost its head before learning how to walk, whether the child would be able to walk after losing its head. Why or why not?
    Harry Hindu

    You said
    The reason you say that this "physical" act is done by some other part of the brain is because you've already passed the performance to another part of the brain.Harry Hindu

    which implies that you think learning involves a top-down process where the skill is passed down from the higher consciousness to the subconscious. The headless chicken disproves that claim.

    As for intent with respect to the brain, think of it as the decision-making part - it decides what, as relevant here, the body will do or learn, e.g. I decide to learn to ride a bike. The actual learning to ride a bike is done by another part of the brain, the subconscious.

    Our discussion began when you said I was conflating "performance" with "thinking", but for that to obtain these two must be different from each other, which, for me, they are not. Performance, insofar as physical activity is concerned, involves the subconscious, and that is thinking even though we lack awareness of it.
  • Harry Hindu
    5.1k
    If you take my answer to your sarcastic question seriously, then it also answers some questions about "intent", which I won't hazard to define. If you're not interested that's alright, but I'm sure many others are, so please forgive my use of your post as a springboard for my own.VagabondSpectre
    If you didn't define "intent" or show how it has a causal influence on the brain, then no, you didn't come close to addressing my earlier points that both you and TMF have diverted the thread from by bringing up one chicken who could walk after a botched decapitation.
  • Harry Hindu
    5.1k
    You said
    The reason you say that this "physical" act is done by some other part of the brain is because you've already passed the performance to another part of the brain.
    — Harry Hindu

    which implies that you think learning involves a top-down process where the skill is passed down from the higher consciousness to the subconscious. The headless chicken disproves that claim.
    TheMadFool
    Read it again. I'm talking about what YOU said. YOU are the one using terms like "physical", "immaterial", "higher consciousness", "lower consciousness", etc. I'm simply trying to parse your use of these terms and ask you questions about what YOU are trying to ask or posit. I haven't put forth any kind of argument. I am only questioning YOU on what YOU have said.

    As for intent with respect to the brain, think of it as the decision-making part - it decides what, as relevant here, the body will do or learn, e.g. I decide to learn to ride a bike. The actual learning to ride a bike is done by another part of the brain, the subconscious.TheMadFool
    And I asked you how does intent make the body move? How does deciding to learn to ride a bike make the body learn how to ride a bike? Where is "intent" relative to the body it moves? Why is your experience of your intent different than my experience of your intent? How would you and I show evidence that you have this thing that you call, "intent"?
  • TheMadFool
    13.8k
    I'm sure you know what intent is. Did you not have an intent to join this forum or to engage with me in this discussion or to have whatever meal that you ate today, etc.?

    Intent is simply to have a purpose, aim, or goal. To want to ride a bike is to have an intent.
  • VagabondSpectre
    1.9k
    If you didn't define "intent" or show how it has a causal influence on the brain,Harry Hindu

    This is an oxymoronic statement. Intention is generated by brains. Brains cause intention...

    you didn't come close to addressing my earlier points that both you and TMF have diverted the thread from by bringing up one chicken who could walk after a botched decapitation.Harry Hindu

    It's clear you haven't read my posts...
  • VagabondSpectre
    1.9k
    For anyone else who missed it, here's the short version:

    The 'pecking' motion that some birds (but especially baby chickens) do is actually an automatic hard-wired reflex that gets triggered by certain stimuli. However, the chick doesn't know what it's doing at first, or why; it just thrusts its beak randomly toward the ground. Once it manages to snatch something tasty (like a bug or grain), it can start to optimize the pecking motion to get more "reward" (a hard wired pleasure signal that plays an essential role in the emergence of intelligence and intention).
  • Metaphysician Undercover
    13.1k
    In fact color is an excellent example of our brain doing math because we can discern colors and color is completely determined by a mathematical quantity viz. frequency of EM waves.TheMadFool

    This is wrong, the eyes are what we use to determine colour, not mathematics. And colour is not determined by frequency of EM waves. That's a false myth.
  • TheMadFool
    13.8k
    This is wrong, the eyes are what we use to determine colour, not mathematics. And colour is not determined by frequency of EM waves. That's a false myth.Metaphysician Undercover

    Myth? Color is a frequency-property of light/EM waves. Change the frequency of light and color changes i.e. without changes in frequency there are no changes in color.

    Although frequency may not be the sole determinant of color, for there may be other explanations for the origins of color, existing color theory bases colors on frequency of light.
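    For concreteness, here is a rough sketch of the conventional wavelength-to-hue bands that standard presentations of the visible spectrum use (the boundaries are approximate and vary between sources):

        # Approximate visible-spectrum bands in nanometres; boundaries differ by source.
        def hue_name(wavelength_nm):
            bands = [
                (380, 450, "violet"),
                (450, 495, "blue"),
                (495, 570, "green"),
                (570, 590, "yellow"),
                (590, 620, "orange"),
                (620, 750, "red"),
            ]
            for lo, hi, name in bands:
                if lo <= wavelength_nm < hi:
                    return name
            return "outside the visible range"

        print(hue_name(532))  # "green" -- change the wavelength and the named hue changes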
  • Metaphysician Undercover
    13.1k
    Color is a frequency-property of light/EM waves.TheMadFool

    You clearly do not know what colour is. Do you recognize that what we sense as colour is combinations, mixtures of wavelengths, and that the eyes have three different types of cone sensors, sensitive to different ranges of wavelengths? The fact that human minds judge EM wavelength using mathematics, and we classify the different types of sensors with reference to these mathematical principles, does not mean that the cones use mathematics to distinguish different wavelengths.

    Consider a couple different repetitive patterns, a clock ticking every second, and something ticking every seven seconds. You can notice that the two are different without using math to figure out that one is 1/7 of the other. It's just a matter of noticing that the patterns are different, not a matter of using math to determine the difference between them.

    Noticing a difference does not require mathematics. We notice that it is bright in the day, and dark at night without using math, and we notice that it is warm in the sun, and colder at night without determining what temperature it is.

    Change the frequency of light and color changes i.e. without changes in frequency there are no changes in color.TheMadFool

    Your logic is deeply flawed. Change the amount of salt in your dinner and the taste changes; therefore there are no changes to taste without changing the amount of salt.
  • Harry Hindu
    5.1k
    The 'pecking' motion that some birds (but especially baby chickens) do is actually an automatic hard-wired reflex that gets triggered by certain stimuli. However, the chick doesn't know what it's doing at first, or why; it just thrusts its beak randomly toward the ground.VagabondSpectre
    How do you know what is in a chick's mind? What does it mean for the chick to not know what it is doing at first? It seems to me that the chick is showing intent to feed, or else it wouldn't peck the ground. How do you know that what it does instinctively is what it intends to do in its mind? For an instinctive behavior - one that is not routed through the filtering of consciousness - what it does is always what it wants to do. It is in consciousness that we re-think our behavior. I'm famished. Should I grab John's sandwich and eat it? For the chick, it doesn't think about whether it ought, or should, do something. It just does it, and there is no voice in its mind telling it what is right or wrong (its conscience). What is "right" is instinctive behavior. Consciousness evolved in highly intelligent and highly social organisms as a means of fine-tuning (instinctive) behavioral responses in social environments.

    Based on your theory, there is no reason to have a reward system, or intent. What would intent be, and what would it be useful for? How did it evolve? If the chick uses stimuli to peck the ground, then why would it need a reward system if natural selection already determined the reward - getting food when the chick pecks the ground - and passes that behavior down to the next generation? Odds are, when a certain stimulus exists, it pecks the ground. The reward would be sustenance. Why would there need to be pleasure signals, or intent, when all that is needed is a specific stimulus to drive the behavior - the stimulus that natural selection "chose" as the best for starting the pecking behavior, because that stimulus and the pecking behavior is what gets food? There would be no need for reward (pleasure signals - what is a pleasure signal relative to a feeling of pleasure?) or intent, because the stimulus-behavioral response explains the situation without the use of those concepts.

    Once it manages to snatch something tasty (like a bug or grain), it can start to optimize the pecking motion to get more "reward" (a hard wired pleasure signal that plays an essential role in the emergence of intelligence and intention).VagabondSpectre
    What was the stimulus? If the stimulus was the sight or smell of the bug or grain that started the instinctive behavior of pecking, then what purpose does the reward serve? If it already knows there is a bug or grain via its senses, and that causes the instinctive behavior, then what is the reward for if it already knew there was a bug or grain on the ground?
  • Harry Hindu
    5.1k
    Myth? Color is a frequency-property of light/EM waves. Change the frequency of light and color changes i.e. without changes in frequency there are no changes in color.

    Although frequency may not be the sole determinant of color, for there may be other explanations for the origins of color, existing color theory bases colors on frequency of light.
    TheMadFool
    Frequency is a property of light. Color is a property of minds. I don't need frequencies of light to strike my retina for me to experience colors. I can think of colors without using my eyes.

    Interesting thought, colors seem to be a fundamental building block of the mind. I know I exist only because I can think, and thinking, perceiving, knowing, imagining, etc. are composed of colors, shapes, sounds, feelings, smells, etc. - sensory data - and nothing seems more fundamental than that.
  • VagabondSpectre
    1.9k
    What does it mean for the chick to not know what it is doing at first?Harry Hindu
    It means that it has no prior experience of the thing it is doing, and also that the proximal cause of the thing is not its high level thoughts. After it gets experience of the thing it is doing, and figures out how to do it on demand, and how to refine the action to actually get food, then we might say "it knows what it is doing".

    It seems to me that the chick is showing intent to feed, or else it wouldn't peck the ground. How do you know that what it does instinctively is what it intends to do in its mind?Harry Hindu

    The science of behavior calls them "fixed action responses/patterns". They're still somewhat mysterious (because neural circuitry is complex), but they're extensively studied.

    When a female elk hears the mating call of a male, she automatically begins to ovulate. Does the female elk "intend" to ovulate? Does she "intend" to mate? I know you'll answer 'yes' to that last one, so what about the case of a recently matured female elk who has never met an adult male and never mated before? How does that female elk "intend" to do something that she has never experienced, does not understand, and doesn't even know exists?

    The chick has never eaten before. It has no underlying concepts about things. In the same way that a baby doesn't know what a nipple is when it begins the nursing action pattern. We know because there is no such thing as being born with existing experience and knowledge; if we put chicks in environments without grains or bugs to eat, they start pecking things anyway (and hurt themselves).

    Again, it's an action they do automatically, and over time they actually learn to optimize and utilize it.

    For an instinctive behavior - one that is not routed through the filtering of consciousness - what it does is always what it wants to do. It is in consciousness that we re-think our behavior. I'm famished. Should I grab John's sandwich and eat it? For the chick, it doesn't think about whether it ought, or should, do something. It just does it, and there is no voice in its mind telling it what is right or wrong (its conscience). What is "right" is instinctive behavior. Consciousness evolved in highly intelligent and highly social organisms as a means of fine-tuning (instinctive) behavioral responses in social environments.Harry Hindu

    Why are you now making claims about "consciousness"? Are you trying to exclude chickens from the realm of consciousness?

    Are chickens not highly intelligent and social animals?

    Based on your theory, there is no reason to have a reward system, or intent. What would intent be, and what would it be useful for?Harry Hindu

    How come you have to look out for obstacles when you are driving or walking down the street?

    Why do fishermen need to come up with novel long term strategies to catch fish when the weather changes?

    How come we all aren't exactly the same, in a static and unchanging environment that requires no adaptation for survival?

    "Brains" first emerged as a real-time response-tool to help dna reproduce more. DNA is a plan, and it can encode many things, but it has a very hard time changing very quickly; it can only redesign things generation to generation; genetically. Brains on the other hand can react to "real time" events via sensory apparatus and response triggers. If you're a plant that filters nutrients from the surrounding water, maybe you can use a basic hard coded reflex to start flailing your leaves more rapidly when there is lots of nutrients floating by.

    But what if your food is harder to get? What if it moves around very quickly, and you need to actually adjust these actions in real time in order to more reliably get food?

    That's where the crudest form of central decision making comes into play. Evolution cannot hard-code a reliable strategy once things start to get too complicated (once the task requires real-time adaptation), so brains step in and do the work. Even in the most primitive animals, there's more going on than hard-wired instinct. There is real-time strategy exploration; cognition. The strategies are ultimately boot-strapped by low-level rewards, like pain, pleasure, hunger, and other intrinsic signals that give our learning a direction to go in.

    What was the stimulus? If the stimulus was the sight or smell of the bug or grain that started the instinctive behavior of pecking, then what purpose does the reward serve? If it already knows there is a bug or grain via its senses, and that causes the instinctive behavior, then what is the reward for if it already knew there was a bug or grain on the ground?Harry Hindu

    It can actually happen without any stimulus (sometimes it's a trigger mechanic with a stimulus threshold, a release mechanic, a gradient response to a stimulus, or even something that happens in the absence of a stimulus). Chicks will start pecking things regardless, but certain body positions or visual patterns might trigger it more often. But that said, randomly pecking is not good in and of itself; the action needs to be actively refined before chicks can do it well.

    The reward is the taste of the bug itself. It gives the bird a reason to keep doing the action, and to do it better.
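    A toy sketch of that reward-driven refinement, with an invented 'aim' parameter and reward function standing in for the chick's real (far more complicated) learning:

        import random

        def reward(aim):
            """Closer to the (unknown to the chick) ideal aim of 0.7 -> more food."""
            return -abs(aim - 0.7)

        aim = random.random()  # the hard-wired reflex starts the bird off with a crude aim
        for trial in range(200):
            tweak = aim + random.gauss(0, 0.05)  # random variation on the built-in pattern
            if reward(tweak) > reward(aim):      # tasted something? keep the better aim
                aim = tweak

        print(round(aim, 2))  # drifts toward 0.7: the reflex has been tuned by reward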
  • TheMadFool
    13.8k
    Frequency is a property of light. Color is a property of minds. I don't need frequencies of light to strike my retina for me to experience colors. I can think of colors without using my eyes.

    Interesting thought, colors seem to be a fundamental building block of the mind. I know I exist only because I can think, and thinking, perceiving, knowing, imagining, etc. are composed of colors, shapes, sounds, feelings, smells, etc. - sensory data - and nothing seems more fundamental than that.
    Harry Hindu

    :ok: :up:
  • fdrake
    6.6k


    Your posts in this thread have been excellent!

    For the uninitiated, the curse of dimensionality is a problem that occurs when fitting statistical models to data; it occurs (roughly) when there are more relevant factors to the model (model parameters) than there are observations (data) to train the model on.

    Regarding the curse of dimensionality: if you look at the combinatorics of the learning - what would be required to store all the distinctions we need to store as binary strings - there are way more ways of manipulating muscles than there is input data for learning how to manipulate them in specified ways without strong prior constraints on the search space (configurations of muscle contractions, say). Neurons are in the same position: there are way more distinctions we recognise and act upon easily than could be embedded into the neurons in binary alone. Another way of saying this is that there isn't (usually) a 'neuron which stores the likeness of your partner's face'. What this suggests is that how we learn doesn't suffer from the curse of dimensionality in the way expected from machine learning algorithms as usually thought of.
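    To put rough numbers on that blow-up (the figures below are invented purely for scale):

        # Back-of-the-envelope illustration of the combinatorial explosion described above.
        muscle_groups = 20
        levels_per_muscle = 3  # crude "relaxed / half / full" activation levels

        configurations = levels_per_muscle ** muscle_groups
        print(f"{configurations:,} possible muscle configurations")  # ~3.5 billion

        practice_trials_per_day = 1_000
        years_to_try_each_once = configurations / (practice_trials_per_day * 365)
        print(f"~{years_to_try_each_once:,.0f} years to sample each configuration once")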

    There are a few ways that the curse of dimensionality gets curtailed, two of which are:

    Constraining the search space, which Vagabond covered by talking about central pattern generators (neurological blueprints for actions that can be amended while learning through rewarded improvisation).

    Another way it can be curtailed is by reducing the amount of input information which is processed without losing the discriminatory/predictive power of your processed input information; this occurs through cleverly aggregating it into features. A feature is a salient/highly predictive characteristic derived from input data by aggregating it cleverly. A classical example of this is eigenfaces, which are images constructed to correspond maximally to components of variation in images of human faces; the pronounced bits in the picture in the link are the parts of the face which vary most over people. Analogically, features allow you to compress information without losing relevant information.

    When people look at faces, they typically look at the most informative parts during an act of recognition - eyes, nose, hair, mouth. Another component of learning in a situation where the curse of dimensionality applies is feature learning; getting used to what parts of the environment are the most informative for what you're currently doing.
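    A minimal sketch of the eigenface idea, using scikit-learn's PCA on its Olivetti faces dataset (fetched on first run); the choice of 50 components is an arbitrary illustrative setting:

        from sklearn.datasets import fetch_olivetti_faces
        from sklearn.decomposition import PCA

        faces = fetch_olivetti_faces()          # 400 images, each flattened to 4096 pixels
        pca = PCA(n_components=50, whiten=True)
        codes = pca.fit_transform(faces.data)   # each face becomes 50 numbers, not 4096

        print(faces.data.shape, "->", codes.shape)  # (400, 4096) -> (400, 50)
        print("variance kept:", round(pca.explained_variance_ratio_.sum(), 2))
        # pca.components_ holds the eigenfaces: the directions of maximal variation.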

    Once it manages to snatch something tasty (like a bug or grain), it can start to optimize the pecking motion to get more "reward" (a hard wired pleasure signal that plays an essential role in the emergence of intelligence and intention).VagabondSpectre

    Like with pecking, it's likely to be the case that features that distinguish good to peck targets (like seeds' shapes and sizes or bugs' motion and leg movements) from bad to peck targets become heavily impactful in the learning process, as once an agent has cottoned onto a task relevant feature, they can respond quicker, with less effort, and just as accurately, as features efficiently summarise task relevant environmental differences.

    Edit: in some, extremely abstract, respect, feature learning and central pattern generators address the same problem; Imagine succeeding at a task as a long journey, central pattern generators direct an agent's behavioural development down fruitful paths from the very beginning, they give an initial direction of travel, feature learning lets an agent decide how to best get closer to their destination along the way; to walk the road rather than climb the mountain.
  • VagabondSpectre
    1.9k
    HUZZAH! My kindred!!!

    Everything you said is bang on the nose!

    The dimensionality reduction and integration of our sensory observations is definitely a critical component of prediction (otherwise there is information overload, which quickly hurts efficiency and scalability further up).

    My own project began simply as an attempt to make an AI spider (since then it has exploded to encompass all forms of animal life)... As it happened, spiders turned out to be one big walking super-sensitive sensory organ (all the sights, all the sounds, all the vibrations, etc...), which is to say they have incredibly dense and "multi-modal" sensory inputs (they have sight, hearing, vibrational sensing, noses on two or more of their legs, and sensory neurons throughout their body that inform them about things like temperature, injury, body position, acceleration, and more). And to integrate these many senses, spiders have a relatively puny brain with which they get up to an uncountable number of interesting behaviors with an ultra-complicated body (if their flavor of intelligence can be scaled up, it would amount to a powerful general intelligence). It's not just the sensory density and body-complexity, but also the fact that they actually exhibit a very wide range of behaviors. Not only can they extract and learn to encode high-level features from individual senses, they can make associations and corroborate those features in and between many different complementary sensory modalities (if it looks like a fly, sounds like a fly, and feels like a fly: fly 100%).

    I could expound the virtues of spiders all day, but the point worth exploring is the fact that spiders have a huge two-ended dimensionality problem (too much in, too much out), and yet their small and primitive-looking nervous systems make magic happen. When I set out to make a spider AI, I didn't have any conception of the dimensionality curse, but I very quickly discovered it... At first, my spider just writhed epileptically on the ground; worse than epileptically (without coherence at all).



    I had taken the time to build a fully fleshed out spider body (a challenge unto itself) with many senses, under the assumption that agents need ample clay to make truly intelligent bricks. And when I set a reinforcement learning algorithm to learn to stand or to walk (another challenge unto itself), it failed endlessly. A month of research into spiders and machine-learning later, I managed to train the spider to ambulate...

    And it wasn't pretty... All it would ever learn is a bunch of idiosyncratic nonsense. Yes, spider, only ever flicking one leg is a possible ambulation strategy, but it's not a good one dammit! Another month of effort, and now I can successfully train the spider to run... Like a horse?



    Imagine my mid-stroke grin; I finally got the spider running (running? I wanted it to be hunting by now!). It took so much fine-tuning to make sure the body and mind/senses had no bugs or inconsistencies, and in the end it still fucking sucked :rofl: ... A spider that gallops like a horse is no spider at all. I almost gave up...

    I decided that there must be more to it... Thinking about spiders and centipedes (and endlessly observing them) convinced me that there has to be some underlying mechanism of leg coordination. After creating a crude oscillator and hooking up the spider muscles with some default settings I got instant interesting results:



    It's still not too pretty, but compared to seizure spider and Hi Ho Silver, this was endlessly satisfying.
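    For anyone curious, a toy version of that kind of coupled-oscillator CPG looks something like this in Python (the leg count, frequency, coupling gain, and phase offsets are illustrative defaults, not the settings used in the project):

        import math

        NUM_LEGS = 8
        FREQ = 2.0       # strides per second
        COUPLING = 1.5   # how strongly each leg is pulled toward its target offset
        DT = 0.01

        # Alternating pattern: neighbouring legs half a cycle apart.
        target_offset = [math.pi * (i % 2) for i in range(NUM_LEGS)]
        phase = [0.0] * NUM_LEGS

        for step in range(300):
            for i in range(NUM_LEGS):
                error = (target_offset[i] + phase[0]) - phase[i]  # stay locked to leg 0
                phase[i] += DT * (2 * math.pi * FREQ + COUPLING * math.sin(error))

        # Muscle command for each leg: a simple sinusoid of its phase.
        print([round(math.sin(p), 2) for p in phase])  # alternate legs end up roughly in anti-phase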

    Over the next half a year or so, I have been continuing to research and develop the underlying neural circuitry of locomotion (while developing the asset and accompanying architecture). I branched out to other body-types beyond spiders, and in doing so I forced myself to create a somewhat generalized approach to setting up what amount to *controller systems* for learning agents. I'm still working toward finalizing the spider controller system (spiders are the Ferrari of control systems), but I have already made wildly good progress with things like centipedes, fish, and even human hands!

    (note: the centipede and the hand have no "brain"; they're headless)





    These fish actually have a central nervous system, whereas the centipede and the hand are "headless" (I'm essentially sending signals down their control channels manually). They can smell and see the food balls, and they likey!



    The schooling behavior is emergent here. In theory they could get more reward by spreading out, but since they have poor vision and since smell is noisy and sometimes ambiguous, they actually group together because combined they form a much more powerful nose (influencing each other's direction with small adjustments acts like a voting mechanic for group direction changes). (I can't be sure of this, but I'm in a fairly good position to make that guess.)

    They have two eyeballs each (quite low resolution though, 32x32xRGB), and these images are passed through a convolutional neural network that performs spatially relevant feature extraction. It basically turns things into an encoded and shortened representation that is then passed as input to the PPO RL training network that is acting as the CNS of the fish (the PPO back-propagation passes into the CNN, making it the "head" classifier, if you will).
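    As a rough sketch of that encoder stage (hypothetical layer sizes, written with PyTorch rather than whatever the project actually uses):

        import torch
        import torch.nn as nn

        class EyeEncoder(nn.Module):
            """Squeeze a 32x32 RGB eye image into a short feature vector for the 'CNS'."""
            def __init__(self, features=64):
                super().__init__()
                self.net = nn.Sequential(
                    nn.Conv2d(3, 16, kernel_size=3, stride=2, padding=1),   # 32x32 -> 16x16
                    nn.ReLU(),
                    nn.Conv2d(16, 32, kernel_size=3, stride=2, padding=1),  # 16x16 -> 8x8
                    nn.ReLU(),
                    nn.Flatten(),
                    nn.Linear(32 * 8 * 8, features),                        # shortened code
                )

            def forward(self, image):      # image: (batch, 3, 32, 32)
                return self.net(image)     # -> (batch, features)

        left_eye, right_eye = torch.rand(1, 3, 32, 32), torch.rand(1, 3, 32, 32)
        encoder = EyeEncoder()
        cns_input = torch.cat([encoder(left_eye), encoder(right_eye)], dim=1)  # (1, 128)
        print(cns_input.shape)  # the policy (e.g. PPO) would train on this compact representation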

    This observational-density aspect of the system has absorbed almost as much of my focus as the output side (where my CPG system is like a dynamic decoder for high-level commands from the CNS, I need a hierarchical system that can act as an encoder to give high-level and relevant reports to the CNS. Only that way can I lighten the computational load on the CNS to make room for interesting super-behaviors that are composed of more basic things like actually walking elegantly). I have flirted with auto-encoding, and some of the people interested in the project are helping me flirt with representation encoding (there's actually a whole zoo of approaches to exhaust).

    The most alluring approach for me is the hierarchical RL approach. Real brains, after all, are composed of somewhat discrete ganglia (they compartmentalize problem spaces), and they do have some extant hierarchy thanks to evolution (we have lower and higher parts of the brain, the lower tending to be basic and older, evolutionarily speaking, with higher areas tending to be more complexly integrated and more recent acquisitions of nature). Sensory data comes in at the bottom, goes up through the low levels, shit happens at the top and everywhere in-between, shit flows back down (from all levels), and behavior emerges. One evolutionary caveat of this is that before the higher parts could have evolved, the lower parts had to actually confer some kind of benefit (and be optimized). Each layer needs to both add overall utility AND not ruin the stability and benefits of the lower ganglia (each layer must graduate the system as a whole). The intuitive take-away from this is that we can start with basic low-level systems, make them good and stable, and then add layers on top to achieve the kind of elegant complex intelligence we're truly after. For most roboticists and researchers, the progress-stalling rub has been finding an elegant and generalized low-level input and output approach.

    Like with pecking, it's likely to be the case that features that distinguish good to peck targets (like seeds' shapes and sizes or bugs' motion and leg movements) from bad to peck targets become heavily impactful in the learning process, as once an agent has cottoned onto a task relevant feature, they can respond quicker, with less effort, and just as accurately, as features efficiently summarise task relevant environmental differences.

    Edit: in some, extremely abstract, respect, feature learning and central pattern generators address the same problem; Imagine succeeding at a task as a long journey, central pattern generators direct an agent's behavioural development down fruitful paths from the very beginning, they give an initial direction of travel, feature learning lets an agent decide how to best get closer to their destination along the way; to walk the road rather than climb the mountain
    fdrake

    One of the more impressive things I have been able to create is a CPG system that can be very easily and intuitively wired with reflexes and fixed action responses (whether they are completely unconscious like the patellar knee reflex, or dynamically modulated by the CNS as excitatory or inhibitory condition control). But one of the trickier things (and something that I'm uncertain about) is creating autonomic visual triggers (humans have queer fears like trypophobia; possibly we're pre-wired for recognizing those patterns). I could actually train a single encoder network to just get really good at recognizing stuff that is relevant to the critter, and pass that learning down through generations, but something tells me this could be a mistake, especially if novel behavior in competition needs to be recognized as a feature.

    I'm still in the thick of things (currently fleshing out metabolic effects like energy expenditure, fatigue, and a crude endocrine system that can either be usefully hard-coded or learned (e.g: stress chemicals that alter the base tension and contraction speed of muscles)).

    In the name of post-sanity I'll end it here, but I've only scratched the surface!
  • Harry Hindu
    5.1k
    The chick has never eaten before. It has no underlying concepts about things. In the same way that a baby doesn't know what a nipple is when it begins the nursing action pattern. We know because there is no such thing as being born with existing experience and knowledge; if we put chicks in environments without grains or bugs to eat, they start pecking things anyway (and hurt themselves).VagabondSpectre

    That depends on what knowledge is. We possess knowledge that we don't know we have. Have you ever forgotten something, only later to be reminded?

    As usual with this topic (mind-matter) we throw about these terms without really understanding what we are saying, or missing in our explanation.

    That's where the crudest form of central decision making comes into play. Evolution cannot hard-code a reliable strategy once things start to get too complicated (once the task requires real-time adaptation), so brains step in and do the work. Even in the most primitive animals, there's more going on than hard-wired instinct. There is real-time strategy exploration; cognition. The strategies are ultimately boot-strapped by low-level rewards, like pain, pleasure, hunger, and other intrinsic signals that give our learning a direction to go in.VagabondSpectre
    Interesting. This part looks like something I have said a number of times before on this forum:

    I am a naturalist because I believe that human beings are the outcomes of natural processes and not separate or special creations. Human beings are as much a part of this world as everything else, and anything that has a causal relationship (like a god creating it) with this world is natural as well. Evolutionary psychology is a relatively new scientific discipline that theorizes that our minds are shaped by natural selection, not just our bodies. This seems like a valid argument to make as learning is essentially natural selection working on shaping minds on very short time scales. You learn by making observations and integrating those observations into a consistent world-view. You change your world-view with new observations. — Harry Hindu

    The brain is a biological organ, like every other organ in our bodies, whose structure and function would be shaped by natural selection. The brain is where the mind is, so to speak, and any change to the brain produces a change in the mind, and any monist would have to agree that if natural selection shapes our bodies, it would therefore shape how our brains/minds interpret sensory information and produce better-informed behavioral responses that would improve survival and finding mates. — Harry Hindu

    Larger brains with higher-order thinking evolved to fine-tune behavior "on the fly" rather than waiting for natural selection to come up with a solution. You're talking evolutionary psychology here. In essence, natural selection not only filters our genes, but it filters our interpretations of our sensory data (and is this really saying that it is still filtering our genes - epigenetics?). More accurate interpretations of sensory data lead to better survival and more offspring. In essence, natural selection doesn't seem to care about "physical" or "mental" boundaries. It applies to both.

    So then, why are we making this dualistic distinction, and using those terms? Why is it that when I look at you, I see a brain, not your experiences? What about direct vs. indirect realism? Is how I see the world how it really is - you are a brain and not a mind with experiences (but then how do I explain the existence of my mind?) - or is it the case that the brain I see is merely a mentally objectified model of your experiences, and your experiences are real and brains are merely mental models of what is "out there", kind of like how bent sticks in water are really mental representations of bent light, not of bent sticks?
  • VagabondSpectre
    1.9k
    That depends on what knowledge is. We possess knowledge that we don't know we have. Have you ever forgotten something, only later to be reminded?

    As usual with this topic (mind-matter) we throw about these terms without really understanding what we are saying, or missing in our explanation.
    Harry Hindu

    Granted, the high-level stuff is still up for interpretation as to how it works, but what I have laid out is the fundamental groundwork upon which basic learning occurs. The specific neural circuitry that causes fixed action responses is known to reside in the spine, and we even see it driving "fictive actions" in utero (before the animal is even born) that conform to standard gaits.

    Forgetting and remembering are functions of memory, and how memory operates and meshes with the rest of our learning and intelligent systems, plus the body, is complicated and poorly understood. But unless you believe that infants are born with pre-existing ideas and knowledge that they can forget and remember, we can very safely say that people are not born with pre-existing ideas and beliefs (we may not be full-blown tabulae rasae, but we aren't fully formed Rembrandts either); only the lowest-level functions can be loosely hard-coded (like the default gait, the coupling of eye muscles, or the good/bad taste of nutritious/poisonous substances).
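
    To make the "loosely hard-coded default gait" idea a little more concrete, here is a toy sketch in Python of a central pattern generator: two mutually coupled oscillators that, with no learning at all, settle into the alternating left/right rhythm that fictive locomotion shows. Every name and constant is invented for illustration; this is the flavor of the mechanism, not a model of any real spinal circuit:

        import math

        # Toy central pattern generator: two phase oscillators coupled so that
        # they settle into anti-phase, giving an alternating left/right rhythm.
        # Purely illustrative; not a model of any real spinal circuitry.

        def cpg_step(phase_left, phase_right, dt=0.01, freq=1.0, coupling=2.0):
            # Each unit advances at its intrinsic frequency and is nudged toward
            # being half a cycle out of step with its partner.
            d_left = 2 * math.pi * freq + coupling * math.sin(phase_right - phase_left - math.pi)
            d_right = 2 * math.pi * freq + coupling * math.sin(phase_left - phase_right - math.pi)
            return phase_left + d_left * dt, phase_right + d_right * dt

        phase_l, phase_r = 0.0, 0.3              # arbitrary starting phases
        for _ in range(2000):
            phase_l, phase_r = cpg_step(phase_l, phase_r)

        # After settling, the two "motor drives" are locked half a cycle apart:
        # one side flexes while the other extends.
        print(round((phase_r - phase_l) % (2 * math.pi), 2))   # close to pi (about 3.14)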

    Larger brains with higher-order thinking evolved to fine-tune an organism's behavior "on the fly" rather than waiting for natural selection to come up with a solution. You're talking evolutionary psychology here. In essence, natural selection not only filters our genes, but also filters our interpretations of our sensory data (and is this really saying that it is still filtering our genes - epigenetics?). More accurate interpretations of sensory data lead to better survival and more offspring. Natural selection doesn't seem to care about "physical" or "mental" boundaries. It applies to both.Harry Hindu
    Evolutionarily endowed predispositions have these complex effects because they bleed into and up through the complex system we inhabit as organisms (e.g. environment affects hormones, hormones affect genes, genes create different hormones, different hormones act as neurotransmitters, non-linear effects emerge in the products of the affected neural networks), but they are also constrained by instability. When you change low-level functionality in tiered complex systems, you run the risk of catastrophic feedback domino effects that destabilize the entire system.

    So then, why are we making this dualistic distinction, and using those terms? Why is it that when I look at you, I see a brain, not your experiences?Harry Hindu
    Because we have to distinguish between the underlying structure and the emergent product. Appealing to certain concepts without giving a sound basis for their mechanical function is where the random speculation comes into play. I minimize my own speculation by focusing on the low-level structures and learning methodology that approximate more primitive intelligent systems. Ancient arthropods that learned to solve problems like swimming or catching fish (*catching a ball*) did so through very primitive and generic central pattern generator circuits and a low-level central decision maker to orchestrate them. In terms of what we can know through evidence and modeling, this is an accepted fact of neurobiology. I try to refrain from making hard statements about how the high-level stuff actually works, because as yet there are too many options and open questions in both machine learning and neuroscience.

    Also, you don't see my brain; you don't even see my experiences; you experience my actions as they are expressed - actions that emerge from my experiences, orchestrated by my brain within the dynamics and constraints of the external world, and then re-filtered back up through your own sensory apparatus.

    What about direct vs. indirect realism? Is how I see the world how it really is - you are a brain and not a mind with experiences (but then how do I explain the existence of my mind?) - or is it the case that the brain I see is merely a mentally objectified model of your experiences, and your experiences are real while brains are merely mental models of what is "out there", kind of like how bent sticks in water are really mental representations of bent light, not bent sticks?Harry Hindu

    We cannot address the hard problem of consciousness, so why try? At worst we're self-deluded into thinking we have free will, and we bumble about a physically consistent (enough) world, perceiving it through secondary apparatus that turn measurements into signals, from which models and features are derived and used to anticipate future measurements in ways that benefit the objectives that drive the learning. Objectives that drive learning are where things start to become hokey, but we can at least make crude assertions like this: "pain-sensing neurons" (measuring devices that check for physical stress and temperature) are part of our low-level reward system that gives our learning neural networks direction (e.g. learn to walk without hurting yourself).
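
    As a toy illustration of that pipeline (measurements come in, a small model is derived from them, and the model is used to anticipate the next measurement, with the prediction error acting as the low-level signal that steers learning), here is a short Python sketch. Everything in it - the "sensor", the linear model, the step size - is an invented stand-in, not a claim about how brains implement any of this:

        import random

        # Toy perception-as-prediction loop: noisy measurements arrive, a tiny
        # linear model is fitted online, and the model is used to anticipate the
        # next measurement. The prediction error is the "teaching signal" that
        # gives learning its direction. Illustrative only.

        random.seed(0)

        def sensor(t):
            # Hidden regularity in the world: a steadily drifting quantity plus noise.
            return 0.5 * t + random.gauss(0.0, 0.3)

        slope, intercept = 0.0, 0.0      # the model's two parameters, learned online
        step_size = 0.5

        for t in range(200):
            predicted = slope * t + intercept          # anticipate the measurement
            measured = sensor(t)                       # the measurement arrives
            error = measured - predicted               # mismatch = the teaching signal
            nudge = step_size * error / (1.0 + t * t)  # normalized so big inputs don't blow up
            slope += nudge * t                         # move the model toward the world
            intercept += nudge

        print(round(slope, 2))   # ends up near 0.5, the hidden drift rate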

    There are a few obvious implications that come from understanding the "low-level" workings of biological intelligence (and how it expresses itself through various systems). I would say that I have addressed and answered the main subject of the thread, and beyond. A homunculus can learn to catch a ball if it is wired correctly, with a sufficiently complex neural network, senses that are adequate and quick enough, and the correct reward signal (and of course the body must be capable of doing so).
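
    To keep that last claim honest about what "learn to catch, given the right reward signal" can mean at its crudest, here is a toy Python sketch. The "policy" is a single number rather than a neural network, the physics is idealized (every throw has the same arc, only the speed varies), the learning rule is blind trial-and-error on a reward, and all of the names and constants are invented for illustration; none of it is meant as a model of real motor learning:

        import random

        # Toy "learn to catch": every throw has the same arc (fixed flight time)
        # but a random horizontal speed, so the landing point is flight time x speed.
        # The catcher senses the speed and moves by gain * speed; the gain is tuned
        # by blind trial-and-error on a reward signal (closeness to the landing spot).

        random.seed(1)
        FLIGHT_TIME = 1.2                                         # seconds in the air
        SPEEDS = [random.uniform(3.0, 8.0) for _ in range(50)]    # practice throws

        def average_reward(gain):
            # Higher is better: negative average distance from the landing point.
            return -sum(abs(gain * s - FLIGHT_TIME * s) for s in SPEEDS) / len(SPEEDS)

        gain, best = 0.0, average_reward(0.0)
        for _ in range(500):
            candidate = gain + random.gauss(0.0, 0.1)   # try a small blind change
            score = average_reward(candidate)
            if score > best:                            # keep it only if reward improves
                gain, best = candidate, score

        print(round(gain, 2))   # settles close to FLIGHT_TIME (about 1.2)

    The only point of the sketch is that variation plus a reward signal is enough to shape behavior toward the catch; scaling the "policy" up to a real network with real senses is where all the open questions live.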
  • Harry Hindu
    5.1k
    Granted, the high-level stuff is still up for interpretation as to how it works, but what I have laid out is the fundamental groundwork upon which basic learning occurs. The specific neural circuitry that causes fixed action responses is known to reside in the spine, and we even see it driving "fictive actions" in utero (before the animal is even born) that conform to standard gaits.

    Forgetting and remembering are functions of memory, and how memory operates and meshes with the rest of our learning and intelligent systems, plus the body, is complicated and poorly understood. But unless you believe that infants are born with pre-existing ideas and knowledge that they can forget and remember, we can very safely say that people are not born with pre-existing ideas and beliefs (we may not be full-blown tabulae rasae, but we aren't fully formed Rembrandts either); only the lowest-level functions can be loosely hard-coded (like the default gait, the coupling of eye muscles, or the good/bad taste of nutritious/poisonous substances).
    VagabondSpectre
    First, you talk about learning, then the next sentence talks about "fixed action responses". I don't see what one has to do with the other unless you are saying that they are the same thing, or related. Does one learn "fixed actions", or does one learn novel actions? One might say that instincts, or "fixed action responses", are learned by a species via natural selection rather than by an organism. Any particular "fixed action response" seems like something that can't be changed, yet humans (at least) can cancel those "fixed" actions when they are routed through the "high level stuff". We can prevent ourselves from acting on our instincts, so for humans they aren't so "fixed". They are malleable. Explaining how "fixed action responses" evolved in a species is no different from explaining how an organism evolved (learned) within its own lifetime. We're simply talking about the different lengths, layers, or levels of space-time at which this evolution occurs.

    Some on this forum talk about knowing how vs. knowing that. If a chicken can walk after a botched decapitation, does its walking entail "knowing" how to walk, or are those people using the term "know" too loosely? Do "fixed action responses" qualify as "knowing how"? What about the level of the species? Can you say that a species "knows how" to walk, or do only organisms "know how" to walk? Also, knowing how to walk and actual walking didn't exist at the moment of the Big Bang. So how did walking and knowing how to walk (are they really separate things?) come about if not by a process similar to what you call "learning" - by trying different responses to the same stimuli to see what works best - kind of like how natural selection had to "try" different strategies for solving the same problem (locomotion) before it "found" what worked, which is now a defining feature of a species (walking on two legs as opposed to four)? It is interesting that we can use these terms, "try", "found", etc., when it comes to the process of natural selection and how computers work (there is a toy sketch of that kind of blind search after the quote below). It reminds me of what Steven Pinker wrote in his book, "How the Mind Works".

    And then along came computers: fairy-free, fully exorcised hunks of metal that could not be explained without the full lexicon of mentalistic taboo words. "Why isn't my computer printing?" "Because the program doesn't know you replaced your dot-matrix printer with a laser printer. It still thinks it is talking to the dot-matrix and is trying to print the document by asking the printer to acknowledge its message. But the printer doesn't understand the message; it's ignoring it because it expects its input to begin with '%!' The program refuses to give up control while it polls the printer, so you have to get the attention of the monitor so that it can wrest control back from the program. Once the program learns what printer is connected to it, they can communicate." The more complex the system and the more expert the users, the more their technical conversation sounds like the plot of a soap opera.

    Behaviorist philosophers would insist that this is all just loose talk. The machines aren't really understanding or trying anything, they would say; the observers are just being careless in their choice of words and are in danger of being seduced into grave conceptual errors. Now, what is wrong with this picture? The philosophers are accusing the computer scientists of fuzzy thinking? A computer is the most legalistic, persnickety, hard-nosed, unforgiving demander of precision and explicitness in the universe. From the accusation you'd think it was the befuddled computer scientists who call a philosopher when their computer stops working rather than the other way around. A better explanation is that computation has finally demystified mentalistic terms. Beliefs are inscriptions in memory, desires are goal inscriptions, thinking is computation, perceptions are inscriptions triggered by sensors, trying is executing operations triggered by a goal.
    — Steven Pinker
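
    Since "try" and "found" are doing a lot of work above, here is the promised toy Python sketch of that kind of blind search, this time at the level of a population rather than a single learner: random variation plus keeping whatever scores best. Every name and number is invented for illustration; nothing here models real evolution or real locomotion:

        import random

        # Toy "selection as search": a population of candidate "strategies" (each
        # just a single number, standing in for something like a stride length),
        # random variation, and survival of whatever works best on a fixed problem.

        random.seed(2)
        BEST_POSSIBLE = 0.7                          # unknown to the population

        def fitness(strategy):
            return -abs(strategy - BEST_POSSIBLE)    # closer to the sweet spot "works better"

        population = [random.uniform(0.0, 2.0) for _ in range(20)]
        for _ in range(100):
            # Keep the better half, then refill with slightly mutated copies of the survivors.
            population.sort(key=fitness, reverse=True)
            survivors = population[:10]
            population = survivors + [s + random.gauss(0.0, 0.05) for s in survivors]

        print(round(max(population, key=fitness), 2))   # the best survivor ends up near 0.7

    Nobody in the loop "intends" anything: "trying" is just variation, and "finding" is just what selection leaves behind, which is the sense in which the same vocabulary fits natural selection, learning, and computers alike.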

    Because we have to distinguish between the underlying structure and the emergent product. Appealing to certain concepts without giving a sound basis for their mechanical function is where the random speculation comes into play. I minimize my own speculation by focusing on the low-level structures and learning methodology that approximate more primitive intelligent systems. Ancient arthropods that learned to solve problems like swimming or catching fish (*catching a ball*) did so through very primitive and generic central pattern generator circuits and a low-level central decision maker to orchestrate them. In terms of what we can know through evidence and modeling, this is an accepted fact of neurobiology. I try to refrain from making hard statements about how the high-level stuff actually works, because as yet there are too many options and open questions in both machine learning and neuroscience.VagabondSpectre
    What do you mean by "emergent product", or more specifically, by "emergent"? How does the mind - something that is often described as "immaterial" or "non-physical" - "emerge" from something that is usually described as the opposite, "material" or "physical"? Are you talking about causation or representation? Or are you simply talking about different views of the same thing? Bodies "emerge" from interacting organs. Galaxies "emerge" from interacting stars and hydrogen gas, but here we are talking about different perspectives on the same "physical" things. The "emergence" is a product of our different perspectives on the same thing. Do galaxies still "emerge" from stellar interactions even when there are no observers at a particular vantage point? There seems to be a stark difference between explaining "emergence" as a causal process and explaining it as different views of the same thing. The former requires one to explain how "physical" things can cause "non-physical" things. The latter requires one to explain how different perspectives can lead one to see the same thing differently, which has to do with the relationship between an observer and what is being observed (being inside the Milky Way galaxy as opposed to outside it, or inside your mind as opposed to outside it). So which is it, or is it something else?

    Also, you don't see my brain; you don't even see my experiences; you experience my actions as they are expressed - actions that emerge from my experiences, orchestrated by my brain within the dynamics and constraints of the external world, and then re-filtered back up through your own sensory apparatus.VagabondSpectre
    Well, I was talking about how, if I cut open your skull, I can see your brain, not your experiences. Even if I cut open your brain, I still wouldn't see anything akin to your experiences. I've seen brain surgeons disrupt a patient's speech by touching certain areas of the brain. What is that experience like? Is it that you know what to say but your mouth isn't working (knowing what to say seems to be different from actually saying it, or from knowing how to say it - think of Stephen Hawking), or is it that your entire mind is befuddled and you don't know what to say when the surgeon touches that area of your brain with an electric probe? How does an electric probe touching a certain area of the brain (a "physical" interaction) allow the emergence of confusion (something "non-physical") in the mind?

    We cannot address the hard problem of consciousness, so why try? At worst we're self-deluded into thinking we have free will, and we bumble about a physically consistent (enough) world, perceiving it through secondary apparatus that turn measurements into signals, from which models and features are derived and used to anticipate future measurements in ways that benefit the objectives that drive the learning. Objectives that drive learning are where things start to become hokey, but we can at least make crude assertions like this: "pain-sensing neurons" (measuring devices that check for physical stress and temperature) are part of our low-level reward system that gives our learning neural networks direction (e.g. learn to walk without hurting yourself).VagabondSpectre
    Solutions to hard problems often come from looking at the same thing differently. The hard problem is a product of dualism. Maybe if we abandoned dualism in favor of some flavor of monism, the hard problem would go away. But then, if the mind is of the same stuff as the brain, why do the two appear so different? Is it because we are simply taking on different perspectives of the same thing, as I said before? Is it the case that:
    Then the mind "thinking" how to catch a ball is the same as the brain "performing" mathematical calculations?Harry Hindu

    There are a few obvious implications that come from understanding the "low-level" workings of biological intelligence (and how it expresses itself through various systems). I would say that I have addressed and answered the main subject of the thread, and beyond. A homunculus can learn to catch a ball if it is wired correctly, with a sufficiently complex neural network, senses that are adequate and quick enough, and the correct reward signal (and of course the body must be capable of doing so).VagabondSpectre
    Well, that's a first - a philosophical question that has actually been answered. Maybe you should get a Nobel Prize, and this thread should be a sticky. The fact that you used a scientific explanation to answer a philosophical question certainly makes me give you the benefit of the doubt, though. :wink: