• MoK
    We can indeed perceive a set of distinct objects as falling under the concept of a number without there being the need to engage in a sequential counting procedure. Direct pattern recognition plays a role in our recognising pairs, trios, quadruples, quintuples of objects, etc., just like we recognise numbers of dots on the faces of a die without counting them each time. We perceive them as distinctive Gestalten. — Pierre-Normand
    Correct. There is, however, a limit on the number of things that we can realize without counting. I think it is related to working memory, and it is at most five to six items.

    But I'm more interested in the connection that you are making between recognising objects that are actually present visually to us and the prima facie unrelated topic of facing open (not yet actual) alternatives for future actions in a deterministic world. — Pierre-Normand
    I was interested in a neural network that can realize the number of objects. I found this thesis, which deals with exactly the problem of realizing the number of objects that I was interested in. The author does not explain what exactly happens at the neural level when the neural network is presented with many objects and realizes their number; I think it is a complex phenomenon. I think we are dealing with the same phenomenon when we face two options in the example of the maze, the left and right paths. So, although the neural processes, whether in our brain or in an artificial neural network, are deterministic, they can lead to the realization of options. By options, I mean things that are real and accessible to us and we can choose one or more of them depending on the situation.
  • MoK
    I would think there's a limit to this. We might recognize the number of dots on a die because of the specific arrangements that we've seen so many times. Would we do as well with five or six randomly arranged objects? Or ten or fifteen? — Patterner
    I think it depends on the person's working memory, which is at most five to six items.
  • Patterner

    Strike that. What do you mean by working memory? I'm thinking someone could glance at, say, a max of 10 randomly arranged items and immediately know there are 10, without counting. Someone else might only be able to do that with up to 5 items.
  • MoK

    Working memory is the memory of the conscious mind which is temporary.
  • Patterner
    Working memory is the memory of the conscious mind which is temporary. — MoK
    Right. I'm thinking this specific thing is less about working memory than what the ability to recognize numbers of randomly arranged objects is called. No?
  • MoK
    Right. I'm thinking this specific thing is less about working memory than what the ability to recognize numbers of randomly arranged objects is called. No? — Patterner
    I think it is related. You can realize a few objects in your visual field immediately without counting. These objects are registered in your working memory. If the number of objects surpasses the size of your working memory, then you cannot immediately report the number of objects and you have to count them. You might find this study interesting.
  • noAxioms
    How about wording it this way:
    A Roomba wouldn't work if it didn't realize it has options.
    — Patterner
    I'm fine with that.

    You didn't answer the question asked "What fundamentally do you do that a Roomba doesn't?" when you imply that a Roomba doesn't realize options.


    We were considering a fork in the path of a maze. Are they not a pair of options? — noAxioms

    Sure they are.

    Sure, one cannot choose to first go down both. Of the options, only one can be chosen, and once done, choosing otherwise cannot be done without some sort of retrocausality. They show this in time travel fictions where you go back to correct some choice that had unforeseen bad consequences. — noAxioms

    The point is that both paths are real and accessible, as we can recognize them. However, the process of recognizing paths is deterministic. This is something that hard determinists deny.
    — MoK
    How can a determinist deny that some physical process is deterministic? Do you have a reference for this denial by 'hard determinists'?
    I mean, even in a non-deterministic universe, the process of recognizing paths (biological or machine) is deterministic. I cannot think of a non-deterministic way to implement it.

    I don't think that the decision results from the brain's neural process. The decision is due to the mind.
    Ah, so you think that this 'mind' is separate from neural processes. You should probably state assumptions of magic up front, especially when discussing how neural processes do something that you deny are done by the neural processes. Or maybe the brain actually has a function after all besides just keeping the heart beating and such.

    since any deterministic system halts when you present it with options.
    Tell that to Roomba or the maze runner, neither of which halts at all.

    A deterministic system always goes from one state to another unique state. If a deterministic system reaches a situation where there are two states available to it, it cannot choose between the two states; therefore it halts.
    No, it makes a choice between them. Determinism helps with that, not hinders it. Choosing to halt is a decision as well, but rarely made. You make a lot of strawman assumptions about deterministic systems, don't you?
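    A toy sketch of what "it makes a choice between them" means for a deterministic system (my own illustration; the state fields and tie-break rule are invented): the transition function always returns exactly one next state, even at a fork, so nothing halts.

```python
# Toy deterministic maze-runner: a pure transition function.
# Given the same state and sensor reading it always returns the same
# next move -- it never halts at a fork, it just picks.

def next_state(state, sensors):
    """Deterministically choose the next move at a fork."""
    left_open, right_open = sensors
    if left_open and right_open:
        # Both paths are 'options'; a fixed tie-break rule decides.
        if state["visits_left"] <= state["visits_right"]:
            return "go_left"
        return "go_right"
    if left_open:
        return "go_left"
    if right_open:
        return "go_right"
    return "turn_back"

state = {"visits_left": 2, "visits_right": 1}
print(next_state(state, (True, True)))   # go_right (the less-visited side)
```

    The point of the sketch is only that "two states available" does not break determinism: the rule itself selects one of them.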


    In the example of the maze, the options are presented in the person's visual field. In the case of rubbery the options are mental objects.
    The maze options are also 'mental' objects, where 'mental' is defined as the state of the information processing portion of the system. A difference in how the choice comes to be known is not a fundamental difference to the choice existing.
  • Patterner
    How about wording it this way:
    A Roomba wouldn't work if it didn't realize it has options.
    — Patterner
    I'm fine with that.

    You didn't answer the question asked "What fundamentally do you do that a Roomba doesn't?" when you imply that a Roomba doesn't realize options.
    — noAxioms
    I did not. I was waiting to see if we were thinking of things the same way.

    The difference is I am aware that I have options. The Roomba goes one way or the other at the command of its programming, never aware of how the decision was made; that a decision was made; or even that there are options. It has no concept of options. It does not think about the choice it made two minutes ago and wonder if it might have been better to have gone the other way. And it certainly doesn't regret any choice it ever made.
  • MoK
    How can a determinist deny that some physical process is deterministic? Do you have a reference for this denial by 'hard determinists'? — noAxioms
    I wanted to say that determinists deny the existence of options rather than determinism.

    Ah, so you think that this 'mind' is separate from neural processes. — noAxioms
    Sure, I think that the mind is separate from neural processes. To me, physical processes in general are not possible without an entity that I call the Mind. I have two threads on this topic. In one of the threads entitled "Physical cannot be the cause of its own change" I provide two main arguments against the physicalist worldview. In another thread entitled "The Mind is the Uncaused Cause", I discuss the nature of causality as vertical rather than horizontal. So no Mind, no physical processes, no neural processes.

    You should probably state assumptions of magic up front, especially when discussing how neural processes do something that you deny are done by the neural processes. — noAxioms
    I am not denying the role of neural processes at all. It is due to neural processes that we can experience things all the time. The existence of options is also due to the existence of neural processes. The neural processes, however, cannot account for direct experience, the so-called Hard Problem of Consciousness. So, to have a coherent view, we need to include the mind as an entity that experiences. The Mind experiences and causes/creates the physical, whereas the mind, such as the conscious mind, experiences ideas, such as the simulation of reality generated by the subconscious mind. The conscious mind only intervenes when it is necessary, for example when there is a conflict of interests in a situation.

    Or maybe the brain actually has a function after all besides just keeping the heart beating and such. — noAxioms
    Sure. No brain, no neural processes, no experience in general, whether the experience is a feeling, the simulation of reality, thoughts, etc.

    Tell that to Roomba or the maze runner, neither of which halts at all. — noAxioms
    That is because a Roomba acts based on the instructions that a human wrote for it. We don't act based on preprogrammed instructions. We are constantly faced with options, and these options have features that we have never experienced before. We normally go through a very complex process of giving weights to options. Once the process of giving weights to options is performed, we are faced with two situations: either the options do not have the same weight, or they have the same weight. We normally choose the option that has the higher weight most of the time, but we can always choose otherwise. When the options have the same weight, we can still freely choose the option we please. In both cases, it is the conscious mind that makes the final decision freely by choosing one of the options.
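    The weighting process could be sketched like this (a purely illustrative toy of my own; the option names and weights are invented): score each option, pick the highest, and flag ties as the case left open for a free choice.

```python
# Illustrative sketch of 'giving weights to options' (names invented).

def choose(options):
    """Return the highest-weighted option; report ties for a 'free' choice."""
    best_weight = max(options.values())
    tied = [name for name, w in options.items() if w == best_weight]
    if len(tied) > 1:
        return ("tie", tied)       # equal weights: left to the chooser
    return ("winner", tied[0])

print(choose({"left": 0.7, "right": 0.3}))   # ('winner', 'left')
print(choose({"left": 0.5, "right": 0.5}))   # ('tie', ['left', 'right'])
```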

    No, it makes a choice between them. Determinism helps with that, not hinders it. Choosing to halt is a decision as well, but rarely made. You make a lot of strawman assumptions about deterministic systems, don't you? — noAxioms
    Not at all. Please see above.

    The maze options are also 'mental' objects, where 'mental' is defined as the state of the information processing portion of the system. A difference in how the choice comes to be known is not a fundamental difference to the choice existing. — noAxioms
    The maze options become mental objects if you think about them; otherwise, they are just something in your visual field.
  • noAxioms
    The difference is I am aware that I have options. — Patterner
    Both are. The Roomba would not be able to choose an option of which it was unaware. So maybe the left path has been visited less recently, but if it didn't know left was an option, it would just go to the one path it does know about and clean the same spot over and over. Not very good programming.

    The Roomba goes one way or the other at the command of its programming
    The programming is part of the Roomba, same as your programming is part of you (maybe; opinions differ on the latter). You make it sound like a program at the factory is somehow remote-controlling the device. It could work that way, but it doesn't.

    never aware of how the decision was made
    Also true of both.

    It has no concept of options. — Patterner
    As I said, the device couldn't operate if it wasn't aware of options. It has sensory inputs. It uses them to determine options, including the option to seek the charging station, just like you do.

    It does not think about the choice it made two minutes ago
    Actually it does, but I do agree that some devices don't retain memory of past choices. How is that a fundamental difference? You also don't remember all choices made in the past, even 2 minutes old. The Roomba doesn't so much remember the specific choices (which come at the rate of several per second, possibly thousands), but rather remembers the consequences of them.

    And it certainly doesn't regret any choice it ever made. — Patterner
    Got me there. The human emotion of regret probably does not enhance its functionality, so they didn't include that. The recent chess playing machines do definitely have regret (its own kind, not the human kind), something necessary for learning, but Roombas are not learning things.



    I wanted to say that determinists deny the existence of options rather than determinism. — MoK
    If they do that, they're using a very different definition of 'options' than are you.

    Your definition (OM): the available paths up for choice. There are usually hundreds of options, but in a simplified model, you come to a T intersection in a maze. [Left, right, back the way you came, just sit there, pause and make a mark] summarize most of the main categories. Going straight is not an option because there's a wall there.
    I am putting words in your mouth, so if I'm wrong, then call it ON (Option definition, Noaxioms) and then give your own definition with clear examples of what is and is not an option.

    OK, said hard determinist with the alternate definition OD: The possible subsequent states that lead from a given initial state. If determinism is true, there is indeed only one of those, both for the Roomba and for you. There is no distinction.

    Thing is, there is no empirical way to figure out if determinism is the case or not. The experience is the same. If you want to go left, you go left. If you want to go right, you go right. That's true, determinism or not, and it's true regardless of which definition of 'options' is used.

    Side note: Using OD, there is one option only with types 2, 5, and 6, but 1, 4, and 6 are not especially considered 'hard determinism'.


    Sure, I think that the mind is separate from neural processes. — MoK
    OK. Then it's going to at some point need to make a physical effect from its choice. If you choose to punch your wife in the face, your choice needs to at some point cause your arm to move, something that cannot happen if the subsequent state is solely a function of the prior physical state. So your view is compatible only with type 6 determinism, and then only in a self-contradictory way, but self-contradiction is what 6 is all about.

    To me, physical processes in general are not possible without an entity that I call the Mind.
    Fine. Work out the problem I identified just above. If you can't do that, then you haven't thought things through. Do you deny known natural law? If not, your beliefs fail right out of the gate. If you do deny it, where specifically is it violated?

    How is the Roomba mind fundamentally different than yours? It's a physical process, and you assert above that such process is not possible without a mind. A rock cannot fall without a mind.

    I suppose that works under idealism, but determinism (or lack of it) has pretty much no meaning under idealism.
  • Patterner

    So Roombas are the mental equals of humans? The only thing separating us is emotion?
  • MoK
    Your definition (OM): the available paths up for choice. There are usually hundreds of options, but in a simplified model, you come to a T intersection in a maze. [Left, right, back the way you came, just sit there, pause and make a mark] summarize most of the main categories. Going straight is not an option because there's a wall there. — noAxioms
    By options, I mean things that are real and accessible to us and we can choose one or more of them depending on the situation.

    OK, said hard determinist with the alternate definition OD: The possible subsequent states that lead from a given initial state. If determinism is true, there is indeed only one of those, both for the Roomba and for you. There is no distinction. — noAxioms
    Sure, I disagree. This thread's whole purpose is to understand how options can exist and be real for entities such as humans with brains. I was just looking to understand how we could realize options as a result of neural processes in the brain. I did an extensive search on the internet and found many methods for object recognition. I also found a thesis that deals with a neural network that can realize the number of objects presented to it. So the existence of options is well established even in the domain of artificial neural networks.

    OK. Then it's going to at some point need to make a physical effect from its choice. If you choose to punch your wife in the face, your choice needs to at some point cause your arm to move, something that cannot happen if the subsequent state is solely a function of the prior physical state. — noAxioms
    The mind can only intervene when options are available to it. Once the decision is made, it becomes an observer and follows the chain of causality until the next point where options become available again.

    Fine. Work out the problem I identified just above. If you can't do that, then you haven't thought things through. Do you deny known natural law? — noAxioms
    Sure, I agree with the existence of physical laws.

    If not, your beliefs fail right out of the gate. — noAxioms
    Not at all. The Mind is in constant charge of keeping things in motion, and in this motion the intrinsic properties of particles are preserved, for example. The physical laws are manifestations of particles having certain intrinsic properties.
  • noAxioms
    So Roombas are the mental equals of humans? The only thing separating us is emotion? — Patterner
    Ask MoK. He's the one that said that "physical processes in general are not possible without an entity that I call the Mind", which implies that a Roomba is not possible without a mind. It's apparently how he explains the action resulting from an immaterial decision.


    By options, I mean things that are real and accessible to us and we can choose one or more of them depending on the situation. — MoK
    I think that pretty much matches the wording I gave. It works great for the Roomba too.
  • MoK
    I think that pretty much matches the wording I gave. It works great for the Roomba too. — noAxioms
    The difference between a human and a Roomba is that a human has a conscious mind that makes the decisions whereas, in the case of a Roomba, all decisions related to different situations are preprogrammed.
  • javra
    Neural processes, however, are deterministic. So I am wondering how deterministic processes can lead to the realization of options. — MoK

    I deem this the crucial premise in the OP that needs to be questioned.

    IFF a world of causal determinism, then sure: “neural processes are deterministic” (just as much as a Roomba). However, if the world is not one of causal determinism, then on what grounds, rational or empirical, can this affirmation be concluded?

    A living brain is after all living, itself composed of individual, interacting living cells, of which neurons are likely best known via empirical studies. As individual living cells, neurons too can be deemed to hold some sort of sentience – this in parallel to that sentience (else mind) that can be affirmed of single-celled eukaryotic organisms, such as ameba. Other than personal biases, there are no rational grounds to deny sentience (mind) to one and not the other. And, outside a stringent conviction in our world being one of causal determinism, there is no reason to conclude that an ameba, for example, behaves in fully deterministic manners. Likewise then applies to the behaviors of any individual neuron. Each neuron seeks both sustenance and stimulation via its synaptic connections so as to optimally live. It’s by now overwhelmingly evidenced that neuroplasticity in fact occurs. Such that it is more than plausible that both synaptic reinforcement and synaptic decay (as well as the creation of new synaptic connections) will occur based on the (granted, very minimal) volition of individual neurons’ attempts to best garner sustenance and stimulation so as to optimize its own individual life as a living cell.

    And all this can well be in tune with the stance that neural processes are in fact not deterministic (here, this in the sense of a causal determinism).

    To this effect, linked here is an article regarding the empirically evidenced intelligence, or else sentience, of individual cohorts of neurons grown in a petri dish which learned how to play Pong (which can be argued to require a good deal of forethought (prediction) to successfully play). Some highlights from the article:

    Summary: Brain cells grown in a petri dish can perform goal-directed tasks, such as learning to play a game of Pong.

    [....]

    “But in truth we don’t really understand how the brain works.”

    By building a living model brain from basic structures in this way, scientists will be able to experiment using real brain function rather than flawed analogous models like a computer.

    [...]

    To perform the experiment, the research team took mouse cells from embryonic brains as well as some human brain cells derived from stem cells and grew them on top of microelectrode arrays that could both stimulate them and read their activity.

    Electrodes on the left or right of one array were fired to tell Dishbrain which side the ball was on, while distance from the paddle was indicated by the frequency of signals. Feedback from the electrodes taught DishBrain how to return the ball, by making the cells act as if they themselves were the paddle.

    [...]

    Kagan says one exciting finding was that DishBrain did not behave like silicon-based systems. “When we presented structured information to disembodied neurons, we saw they changed their activity in a way that is very consistent with them actually behaving as a dynamic system,” he says.

    “For example, the neurons’ ability to change and adapt their activity as a result of experience increases over time, consistent with what we see with the cells’ learning rate.”
    https://neurosciencenews.com/organoid-pong-21625/

    Again, if one insists in the world being one of causal determinism, then all this is itself determinate in all respects. Fine. But if not, empirical studies such as this strongly indicate that neural processes are indeed indeterministic, aka, not deterministic.

    The inquiry into options available and the act of choice making itself would then follow suit.
  • MoK
    I deem this the crucial premise in the OP that needs to be questioned.

    IFF a world of causal determinism, then sure: “neural processes are deterministic” (just as much as a Roomba). However, if the world is not one of causal determinism, then on what grounds, rational or empirical, can this affirmation be concluded?
    — javra
    In this thread, I really didn't want to get into a debate about whether the world at the microscopic level is deterministic or not. There is one interpretation of quantum mechanics, namely the De Broglie–Bohm interpretation, that is paradox-free and deterministic. Accepting this interpretation, it follows that a neuron is also a deterministic entity. What happens when we have a set of neurons may be different, though. Could a set of neurons work together in such a way that this collaboration results in the existence of options? We know for a fact that this is the case in the human brain. But what about when we have a few neurons? To answer that, let's put the real world aside and look at artificial neural networks (ANN) for a moment. Could the ANN realize and count different objects? It seems that is the case. So options are realizable even to the ANN, while the neurons in such a system function in a purely deterministic way.
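    As an illustration of how a purely deterministic network can report a count (my own toy sketch, not the model from the thesis mentioned above): a single linear unit with all weights fixed at 1 sums a binary visual field, so its output equals the number of objects present.

```python
import numpy as np

# Toy deterministic 'counting unit': one linear neuron whose weights are
# all 1, so its output is the number of active inputs in a binary field.
# (My own sketch; counting networks in the literature are learned, not fixed.)

def count_objects(visual_field):
    w = np.ones(len(visual_field))        # fixed weights, fully deterministic
    return int(w @ np.asarray(visual_field))

field = [0, 1, 1, 0, 1]                   # three 'objects' present
print(count_objects(field))               # 3
```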

    A living brain is after all living, itself composed of individual, interacting living cells, of which neurons are likely best known via empirical studies. As individual living cells, neurons too can be deemed to hold some sort of sentience – this in parallel to that sentience (else mind) that can be affirmed of single-celled eukaryotic organisms, such as ameba. — javra
    An ameba is a living organism and can function on its own. A neuron, although it is a living entity, depends for its function on the function of other neurons. For example, the strengthening and weakening of a synapse is the result of whether the neurons connected by the synapse fire in synchrony or not (the so-called Hebbian theory). So there is a mechanism for the behavior of a few neurons, and it seems that this is the basic principle for memory, and I would say for other complex phenomena, even such as thinking.
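    The Hebbian rule is often summarized as "cells that fire together wire together", i.e. delta_w = eta * pre * post. A minimal sketch (the learning rate and activity values here are invented for illustration):

```python
# Minimal Hebbian update: the synapse strengthens only when the pre- and
# post-synaptic neurons are active together (delta_w = eta * pre * post).
# The learning rate and activities are invented for illustration.

def hebbian_update(w, pre, post, eta=0.1):
    return w + eta * pre * post

w = 0.5
w = hebbian_update(w, pre=1.0, post=1.0)   # synchronous firing: strengthens
print(round(w, 2))                         # 0.6
w = hebbian_update(w, pre=1.0, post=0.0)   # no postsynaptic firing: unchanged
print(round(w, 2))                         # 0.6
```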

    Other than personal biases, there are no rational grounds to deny sentience (mind) to one and not the other. And, outside a stringent conviction in our world being one of causal determinism, there is no reason to conclude that an ameba, for example, behaves in fully deterministic manners. Likewise then applies to the behaviors of any individual neuron. Each neuron seeks both sustenance and stimulation via its synaptic connections so as to optimally live. — javra
    I would say that an ameba has a mind, can learn, etc. but I highly doubt that a single neuron has a mind and can freely decide as it seems that the functioning of a neuron is not independent of other neurons. Please see the previous comment.

    It’s by now overwhelmingly evidenced that neuroplasticity in fact occurs. Such that it is more than plausible that both synaptic reinforcement and synaptic decay (as well as the creation of new synaptic connections) will occur based on the (granted, very minimal) volition of individual neurons’ attempts to best garner sustenance and stimulation so as to optimize its own individual life as a living cell. — javra
    Neuroplasticity, to the best of our knowledge, is the result of neurons firing together. Please see my comment on the Hebbian theory.

    To this effect, linked here is an article regarding the empirically evidenced intelligence, or else sentience, of individual cohorts of neurons grown in a petri dish which learned how to play Pong (which can be argued to require a good deal of forethought (prediction) to successfully play). — javra
    That was an interesting article to read. But there are almost 800,000 cells in the DishBrain. I don't understand the relevance of this study to the behavior of one neuron, or to whether a single neuron is a deterministic entity.
  • javra
    In this thread, I really didn't want to get into a debate about whether the world at the microscopic level is deterministic or not. — MoK

    My bad then.

    To answer that, let's put the real world aside and look at artificial neural networks (ANN) for a moment. — MoK

    In other words, look at silicon-based systems rather than life-based systems in order to grasp how life-based systems operate. Not something I'm myself into. But it is your OP, after all.

    As individual living cells, neurons too can be deemed to hold some sort of sentience – this in parallel to that sentience (else mind) that can be affirmed of single-celled eukaryotic organisms, such as ameba. — javra

    An ameba is a living organism and can function on its own. A neuron, although it is a living entity, depends for its function on the function of other neurons. For example, the strengthening and weakening of a synapse is the result of whether the neurons connected by the synapse fire in synchrony or not (the so-called Hebbian theory). So there is a mechanism for the behavior of a few neurons, and it seems that this is the basic principle for memory, and I would say for other complex phenomena, even such as thinking.
    — MoK

    I'll only point out that all of your reply addresses synapses - which are connections in-between neurons and not the neurons themselves.

    So none of this either rationally or empirically evidences that an individual neuron is not of itself a sentience-endowed lifeform - one that engages in autopoiesis, to include homeostasis and metabolism as an individual lifeform, just as much as any self-sustaining organism does; one that seeks out stimulation via both dendritic and axonal growth just as much as any self-sustaining organism seeks out and requires stimulation; one which perceives stimuli via its dendrites and acts, else reacts, via its axon; etc.

    As I was previously mentioning, there are no rational or empirical grounds to deny sentience to the individual neuron (or most any somatic cell for that matter - with nucleus-lacking red blood cells as a likely exception) when ascribing sentience to self-sustaining single celled organisms such as amebas. Again, the explanation you've provided for neurons not being in some manner sentient falls short in part for the reasons just mentioned: in short, synapses are not neurons, but the means via which neurons communicate.

    But back to the premise of neural processes being deterministic ...
  • MoK
    My bad then. — javra
    I am sorry. But I elaborated a little on quantum mechanics in my reply to your post. I hoped that was enough.

    In other words, look at silicon-based systems rather than life-based systems in order to grasp how life-based systems operate. Not something I'm myself into. But it is your OP, after all. — javra
    As I mentioned, I was interested in understanding whether a few neurons can work together such that the system realizes options. I think it would be extremely difficult to make such a setup with living neurons. That was why I suggested focusing on artificial neural networks.

    I'll only point out that all of your reply addresses synapses - which are connections in-between neurons and not the neurons themselves. — javra
    That is a very important part when it comes to the neuroplasticity of the brain. A neuron mainly just fires when it is depolarized to a certain extent.

    So none of this either rationally or empirically evidences that an individual neuron is not of itself a sentience-endowed lifeform - one that engages in autopoiesis, to include homeostasis and metabolism as an individual lifeform, just as much as any self-sustaining organism does; one that seeks out stimulation via both dendritic and axonal growth just as much as any self-sustaining organism seeks out and requires stimulation; one which perceives stimuli via its dendrites and acts, else reacts, via its axon; etc.

    As I was previously mentioning, there are no rational or empirical grounds to deny sentience to the individual neuron (or most any somatic cell for that matter - with nucleus-lacking red blood cells as a likely exception) when ascribing sentience to self-sustaining single celled organisms such as ameba. Again, the explanation you've provided for neurons not being in some manner sentient falls short in part for the reasons just mentioned: in short, synapses are not neurons, but the means via which neurons communicate.
    — javra
    I highly doubt that a neuron has a mind. But let's assume so for the sake of the argument. In which location in a neuron is the information related to what the neuron experienced in the past stored? How could a neuron realize options? How could a group of neurons work coherently if each is free?
  • javra
    That is a very important part when it comes to the neuroplasticity of the brain. A neuron mainly just fires when it becomes depolarized to a certain extent. — MoK

    This overlooks the importance of dendritic input, which culminates in the neuron's nucleus. As to neuroplasticity, it can be rather explicitly understood to consist of new synaptic connections created by new outreachings of dendrites and axons. Otherwise the brain would remain permanently hardwired, so to speak, with the neural connections it has from birth till the time of its death. And I distinctly remember the latter being the exact opposite of neuroplasticity in the neuroscience circles I once partook of. So understood, neuroplasticity is contingent on individual neurons growing their dendrites and axons (via most likely trial-and-error means) toward new sources of synapse-resultant stimulation.

    I highly doubt that a neuron has a mind. But let's assume so for the sake of the argument. In which location in a neuron is the information related to what the neuron experienced in the past stored? How could a neuron realize options? — MoK

    The same questions can be asked with equal validity of any individual amoeba, for example. Point being, if you allow for "mind in life" as it would pertain to an amoeba, there is no reason not to then allow the same for a neuron. The as-of-yet unknown detailed mechanism of how all this occurs in a lifeform devoid of a central nervous system is completely irrelevant to the issue at hand.

    How could a group of neurons work coherently if each is free?MoK

    Free from what? All I said is that an individual neuron can well be maintained to be sentient, hence to hold a volition and mind (utterly minuscule in comparison to our own though it would then be). As to the issue of how a plurality of sentient lifeforms can work "coherently" - assuming that by "coherently" you meant cooperatively - I'm not sure what you're expecting here. How does a society of humans work cooperatively? A multitude of hypotheses could be offered, one of which is that of maximizing one's own well-being via cooperation with others. Besides, just as liver cells are built to work cooperatively in the liver as an organ, for example, neurons are built to work cooperatively in the CNS as an organ.
  • Patterner
    1.2k

    What is a mind? What does a mind do? This is from Journey of the Mind: How Thinking Emerged from Chaos, by Ogi Ogas and Sai Gaddam:
    A mind is a physical system that converts sensations into action. A mind takes in a set of inputs from its environment and transforms them into a set of environment-impacting outputs that, crucially, influence the welfare of its body. This process of changing inputs into outputs—of changing sensation into useful behavior—is thinking, the defining activity of a mind.

    Accordingly, every mind requires a minimum of two thinking elements:
    •​A sensor that responds to its environment
    •​A doer that acts upon its environment
    — Ogas and Gaddam
    They talk about the amoeba, which has the required elements.

    Obviously, these definitions of mind and thinking are as basic as can be. But it's where it all starts.

    Can a neuron be said to have a mind, to think, by these definitions?

    Or do you say a neuron has a mind because of some other definition?
  • javra
    2.8k
    Accordingly, every mind requires a minimum of two thinking elements:
    •​A sensor that responds to its environment
    •​A doer that acts upon its environment — Ogas and Gaddam

    They talk about the amoeba, which has the required elements.

    Obviously, these definitions of mind and thinking are as basic as can be. But it's where it all starts.

    Can a neuron be said to have a mind, to think, by these definitions?
    Patterner

    I don't see why not.

    The sensor aspect of thought so defined: the neuron, via its dendrites, senses in its environment of fellow neurons their axonal firings (the axons of other neurons to which the particular neuron's dendrites are connected via synapses) and responds to this environment of fellow neurons by firing its own axon so as to stimulate other neurons via their own dendrites.

    The doer aspect of thought so defined: the neuron's growth of its dendrites and axon (which is requisite for neural plasticity) occurs with the, at least apparent, purpose of finding, or else creating, new synaptic connections via which to be stimulated and to stimulate - this being a doing in which the neuron acts upon its environment in novel ways.

    To me, it seems to fit the definitions of mind offered just fine.
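    For what it's worth, the two-element definition being applied here is simple enough to express as a toy sketch. All names and behaviors below are hypothetical illustrations of the sensor/doer scheme, not anything taken from the book:

    ```python
    # A toy "mind" per the quoted Ogas/Gaddam definition: a sensor that responds
    # to its environment plus a doer that acts upon it, with "thinking" being
    # the conversion of sensation into action. Purely illustrative.

    class MinimalMind:
        def sense(self, environment):
            """Sensor: read a stimulus from the environment."""
            return environment.get("stimulus", 0.0)

        def act(self, stimulus):
            """Doer: transform sensation into a welfare-affecting action."""
            return "approach" if stimulus > 0 else "withdraw"

        def think(self, environment):
            """Thinking, on this definition: sensation converted into action."""
            return self.act(self.sense(environment))

    mind = MinimalMind()
    print(mind.think({"stimulus": 0.7}))   # approach
    print(mind.think({"stimulus": -0.2}))  # withdraw
    ```

    On this rendering, anything with both a working `sense` and a working `act` coupled together - an amoeba, or arguably a neuron as described above - satisfies the definition, which is precisely why the definition is as permissive as it is.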
  • javra
    2.8k


    BTW, so it's known, what I just wrote is a simplified model of the average neuron.

    Different neurons will have different physiologies. Some neurons, for example, do not have an axon, at least not one that can be differentiated from their dendrites. (reference) Other neurons have over 1,000 dendritic branches and the one axon. (reference) Still, they all (to my knowledge) sense dendritic input and act upon their environment in fairly blatant manners - thereby staying accordant with the definition of mind you've provided.

    Also: in fairness, my own general understanding of mind follows E. Thompson's understanding pretty closely, which he explains in great detail in his book "Mind in Life: Biology, Phenomenology, and the Sciences of Mind". The first paragraph of the book's preface gives the general idea:

    THE THEME OF THIS BOOK is the deep continuity of life and mind. Where there is life there is mind, and mind in its most articulated forms belongs to life. Life and mind share a core set of formal or organizational properties, and the formal or organizational properties distinctive of mind are an enriched version of those fundamental to life. More precisely, the self-organizing features of mind are an enriched version of the self-organizing features of life. The self-producing or “autopoietic” organization of biological life already implies cognition, and this incipient mind finds sentient expression in the self-organizing dynamics of action, perception, and emotion, as well as in the self-moving flow of time-consciousness.
    https://lchc.ucsd.edu/MCA/Mail/xmcamail.2012_03.dir/pdf3okBxYPBXw.pdf

    But the definitions of mind you've provided are far easier to express and to me work just fine.
  • MoK
    1.3k
    The same questions can be asked with equal validity of any individual amoeba, for example. Point being, if you allow for "mind in life" as it would pertain to an amoeba, there is no reason not to then allow the same for a neuron. The as-of-yet unknown detailed mechanism of how all this occurs in a lifeform devoid of a central nervous system is completely irrelevant to the issue at hand.javra
    I think that amoebas evolved in such a way as to function as single organisms. Neurons, however, are different entities and function together. Moreover, scientific evidence shows that a single amoeba can learn and remember. To my knowledge, no scientific evidence exists that a single neuron can learn or remember.
  • javra
    2.8k
    I think that amoebas evolved in such a way as to function as single organisms. Neurons, however, are different entities and function together.MoK

    Yes, but I don't see how that is significant to neurons being or not being sentient.

    Moreover, scientific evidence shows that a single amoeba can learn and remember. To my knowledge, no scientific evidence exists that a single neuron can learn or remember.MoK

    Here's an article from Nature to the contrary: Neurons learn by predicting future activity.
  • javra
    2.8k
    I thought this could be of interest, or at least further clarify the position I currently hold:

    Also: in fairness, my own general understanding of mind follows E. Thompson's understanding pretty closely,javra

    I should edit this as follows: this is so for certain aspects of mind – such as those pertaining to single-celled lifeforms, be they somatic cells (e.g., neurons) or individual organisms (e.g., amoebas) – and somewhat less so for others: I find far more complexity than the book offers in relation to the workings of a human mind, for example (which we’ve previously briefly discussed in another thread).

    As one good example of this approach in regard to the sentience of an organism and that of its individual constituent cells:

    Most – including in academic circles – will acknowledge that a plant is sentient (some discussing the issue of plant intelligence to boot): it, after all, can sense sunlight and gravity such that it grows its leaves toward sunlight and its roots toward gravity. But, although this sensing of environment will be relatively global to the plant, I for the life of me can’t fathom how a plant might then have a centralized awareness and agency along the lines of what animals most typically have – such that in more complex animals it becomes the conscious being. I instead envision a plant’s sentience to generally be the diffuse sum product of the interactions between its individual constituent cells, such that each cell – with its own specialized functions – holds its own (utterly minuscule) sentience as part of a cooperative we term the organism, in this case the plant. This is, in some ways, parallel to how a living sponge as organism – itself being an animal – is basically just a communal cooperation between individual eukaryotic cells which feed together via a system of openings: with no centralized awareness to speak of. This general outlook then fits with the reality that some plants have no clear boundaries as organisms – despite yet sensing, minimally, sunlight and gravity – with grass as one such example: a field of grass of the same species is typically intimately interconnected underground as one organism, yet a single blade of grass and its root can live just fine independently as an individual organism if dug up and planted in a new area. I thereby take the plant to be sentient, but only as a cooperative of individual sentience-endowed plant cells whose common activities result in the doings of the plant as a whole organism: doing in the form of both sensing its environment and acting upon it (albeit far slower than most any animal). I don’t so far know of a better way of explaining a plant’s sentience given all that we know about plants.

    Whereas in animals such as humans, the centralized awareness and agency which we term consciousness plays a relatively central role in our total mind's doings – obviously, with the unconscious aspects of our mind not being conscious to us; and with the latter in turn resulting from the structure and functioning of our physiological CNS, which itself holds different zones of activity (from which distinct agencies of the unconscious mind might emerge) and which we consider body rather than mind. So once one entertains the sentience of neurons, one thereby addresses the constituents of one's living body, rather than of one's own mind per se.



    My bad if this is too off-topic. I won't post anymore unless there's reason to reply.
  • MoK
    1.3k
    Here's an article from Nature to the contrary: Neurons learn by predicting future activity.javra
    That was an interesting article to read. I however have a serious objection regarding whether it is a collection of neurons that learns and adapts itself, or whether each single neuron has such a capacity. Of course, if you assume that each neuron has such a capacity and plug that into the equation, then you obtain that a collection of neurons also has the same capacity; but the opposite is not necessarily true. I also don't think that they had access to individual neuron activity in the experiments (although they mentioned neuron activity in the discussion of Figures 4 and 5). So I stick to what I think is more correct: a collection of neurons can learn, but individual neurons cannot.
  • javra
    2.8k
    I understand you disagree and can find alternative explanations for a single neuron learning. One could do the same for amoebas if one wants to play devil's advocate.

    If you're willing, what are the "serious objections" that you have to the possibility that individual neurons can learn from experience?
  • MoK
    1.3k
    Most – including in academic circles – will acknowledge that a plant is sentient (some discussing the issue of plant intelligence to boot): it, after all, can sense sunlight and gravity such that it grows its leaves toward sunlight and its roots toward gravity. But, although this sensing of environment will be relatively global to the plant, I for the life of me can’t fathom how a plant might then have a centralized awareness and agency along the lines of what animals most typically have – such that in more complex animals it becomes the conscious being. I instead envision a plant’s sentience to generally be the diffuse sum product of the interactions between its individual constituent cells, such that each cell – with its own specialized functions – holds its own (utterly minuscule) sentience as part of a cooperative we term the organism, in this case the plant. This is, in some ways, parallel to how a living sponge as organism – itself being an animal – is basically just a communal cooperation between individual eukaryotic cells which feed together via a system of openings: with no centralized awareness to speak of. This general outlook then fits with the reality that some plants have no clear boundaries as organisms – despite yet sensing, minimally, sunlight and gravity – with grass as one such example: a field of grass of the same species is typically intimately interconnected underground as one organism, yet a single blade of grass and its root can live just fine independently as an individual organism if dug up and planted in a new area. I thereby take the plant to be sentient, but only as a cooperative of individual sentience-endowed plant cells whose common activities result in the doings of the plant as a whole organism: doing in the form of both sensing its environment and acting upon it (albeit far slower than most any animal). I don’t so far know of a better way of explaining a plant’s sentience given all that we know about plants.javra
    I read about plant intelligence a long time ago and I was amazed. They can not only distinguish up from down, etc.; they are also capable of communicating with each other. I can find those articles and share them with you if you are interested.

    Whereas in animals such as humans, the centralized awareness and agency which we term consciousness plays a relatively central role in our total mind's doings – obviously, with the unconscious aspects of our mind not being conscious to us; and with the latter in turn resulting from the structure and functioning of our physiological CNS, which itself holds different zones of activity (from which distinct agencies of the unconscious mind might emerge) and which we consider body rather than mind.javra
    To me, what you call the unconscious mind (what I call the subconscious mind) is conscious. Its activity is most of the time absent from the conscious mind, though. But you can tell that the subconscious mind and the conscious mind are constantly working with each other when you reflect on a complex process of thought, for example. Although it is the conscious mind which is the thinking entity, it needs a constant flow of information from what was experienced and thought in the past. This information is of course registered in the subconscious mind's memory. The amount of information registered in the subconscious mind's memory is however huge, so the subconscious mind has to be very selective about the type of information that should be passed to the conscious mind, depending on the conscious mind's subject of focus. Therefore, the subconscious mind is an intelligent entity as well. I also think that what we call intuition is due to the subconscious mind!

    So once one entertains the sentience of neurons, one here thereby addresses the constituents of one's living body, rather than of one's own mind per se.javra
    I cannot follow what you are trying to say here.