I don't understand how, in the case of amoebas, they could possibly interact and learn collectively.
I understand you disagree and can find alternative explanations to a single neuron learning. One could do the same for amoebas if one wants to play devil's advocate. — javra
I try to be minimalistic whenever I explain complex phenomena. The behavior of an electron is lawful and deterministic to me. The same applies to larger entities such as atoms and molecules. I try to be minimalistic even in the case of a neuron unless I face a phenomenon that cannot be explained. If I find myself in a troublesome situation where I cannot explain a phenomenon, then I dig from top to bottom, questioning the assumptions I made and trying to see where the faulty assumption lies. I would even question the assumptions I made about electrons if necessary.
If you're willing, what are the "serious objections" that you have to the possibility that individual neurons can learn from experience? — javra
I read about plant intelligence a long time ago and I was amazed. They can not only distinguish between up and down, etc., they are also capable of communicating with each other. I can find those articles and share them with you if you are interested. — MoK
To me what you call the unconscious mind (what I call the subconscious mind) is conscious. — MoK
So once one entertains the sentience of neurons, one is thereby addressing the constituents of one's living body, rather than of one's own mind per se. — javra
I cannot follow what you are trying to say here. — MoK
I understand you disagree and can find alternative explanations to a single neuron learning. One could do the same for amoebas if one wants to play devil's advocate. — javra
I don't understand how, in the case of amoebas, they could possibly interact and learn collectively. — MoK
In regard to the subject of this thread, the existence of options in a deterministic world, I found there is a simple explanation for the phenomenon once I consider a set of neurons, each being a simple, deterministic entity. — MoK
Cool! :wink:
I'm relatively well aware of this. Thank you. :up: It gets even more interesting when considering that, from what we know, subterranean communication between plants seems to require their communal symbiosis with fungi species. In a very metaphorical sense, their brains are underground, and they communicate via a potentially wide web of connections. — javra
Correct.
I in many ways agree. I would instead state that the unconscious mind - which I construe to not always be fully unified in its agencies - is "aware and volition-endowed". So, in this sense, it could be stated to be in its own way conscious (here, to my mind, keeping things simple and not addressing the plurality of agencies that could therein occur), but we as conscious agents are yet unconscious of most of its awareness and doings. This is why I still term it the unconscious mind: we as conscious beings are, again, typically not conscious of its awareness and doings. — javra
A neuron is a living cell. Whether it is sentient and can learn things is a subject of discussion. I believe a neuron could become sentient if this provided an advantage for the organism. This is, however, very costly, since it requires the neuron to be a complex entity. Such a neuron not only needs more food but also a sort of training before it can function properly within the brain, where all neurons are complex entities. So, let's say that you have a single neuron, let's call it X, which can perform a function, let's call it Z, learning for example. Now let's assume a collection of neurons, let's call them Y, which can collectively perform the same function Z even though no individual neuron is capable of performing Z on its own. The question is whether it is more economical for the organism to have X or Y. That is a very hard question. It is possible to find an organism that does not have many neurons and in which each neuron can perform Z. That, however, does not mean that we can generalize such an ability to the neurons of other organisms that have plenty of neurons. The former organism may, through evolution, gain such a capacity, while such a capacity is neither necessary nor economical for the latter organism.
I basically wanted to express that, if one allows that neurons are sentient, their own sentience is part and parcel of our living brain's total physiology, this as aspects of our living bodies. Whereas, for us as mind-endowed conscious beings in our own right, our own sentience is not intertwined with that pertaining to individual neurons of our CNS. Rather, they do their thing within the CNS for the benefit of their own individual selves relative to their community of fellow neurons, which in turn results in certain neural-web firings within our brain, which in turn results in the most basic aspects of our own unconscious mind supervening on these neural-web firings, with these most basic aspects of our unconscious mind then in one way or another ultimately combining to form the non-manifold unity of the conscious human being. A consciousness which on occasion interacts with various aspects of its unconscious mind, such as when thinking about (questioning, judging the value of, etc.) concepts and ideas - as you've mentioned. — javra
Thanks for the elaboration.
Hope that makes what I previously said clearer. — javra
I said that for amoebas to learn collectively, as neurons do, they need to interact.
I haven't claimed that amoebas can act collectively. — javra
I agree.
Here, I was claiming that the so-called "problem of other minds" can be readily applied to the presumed sentience of amoebas. This in the sense that just because it looks and sounds like a duck doesn't necessitate that it so be. Hence, just because an amoeba looks and acts as though it is sentient, were one to insist on it, one could argue that the amoeba might nevertheless be perfectly insentient all the same. This as you seem to currently maintain for individual neurons. But this gets heavily into issues of epistemology and into what might constitute warranted vs. unwarranted doubts. (If it looks and sounds like a duck, it most likely is.) — javra
I agree that considering neurons to be sentient and able to learn may not disrupt the function of the brain, but I think it might become very costly for the organism when a small set of simpler neurons can perform the same function, learning for example.
No worries there. But why would allowing for neurons holding some form of sentience then disrupt this general outlook regarding the existence of options? The brain would still do what it does - this irrespective of how one explains the (human) mind-brain relationship. Or so I so far find. — javra
A neuron is a living cell. Whether it is sentient and can learn things is a subject of discussion. I believe a neuron could become sentient if this provided an advantage for the organism. This is, however, very costly, since it requires the neuron to be a complex entity. Such a neuron not only needs more food but also a sort of training before it can function properly within the brain, where all neurons are complex entities. So, let's say that you have a single neuron, let's call it X, which can perform a function, let's call it Z, learning for example. Now let's assume a collection of neurons, let's call them Y, which can collectively perform the same function Z even though no individual neuron is capable of performing Z on its own. The question is whether it is more economical for the organism to have X or Y. That is a very hard question. It is possible to find an organism that does not have many neurons and in which each neuron can perform Z. That, however, does not mean that we can generalize such an ability to the neurons of other organisms that have plenty of neurons. The former organism may, through evolution, gain such a capacity, while such a capacity is neither necessary nor economical for the latter organism. — MoK
By the way, I found a simple neural network that can perform a simple sum. — MoK
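As an aside on that "simple sum" network: a single linear unit whose weights settle near [1, 1] with a bias near 0 is already such a network, and those weights can be learned from examples. Below is a minimal sketch in Python/NumPy; the data ranges, learning rate, and iteration count are arbitrary illustrative choices and are not taken from whatever network MoK found.

```python
import numpy as np

rng = np.random.default_rng(0)

# Training data: pairs of numbers and their sums.
X = rng.uniform(-10, 10, size=(200, 2))
y = X.sum(axis=1)

# One linear neuron, y_hat = X @ w + b, fitted by gradient descent on squared error.
w = rng.normal(size=2)
b = 0.0
lr = 0.01
for _ in range(2000):
    err = X @ w + b - y
    w -= lr * (X.T @ err) / len(X)
    b -= lr * err.mean()

print(np.round(w, 3), round(b, 3))                     # weights end up near [1, 1], bias near 0
print(round(float(w @ np.array([3.0, 4.0]) + b), 2))   # roughly 7.0
```

The only point of the sketch is that "performing a sum" falls out of a fixed, deterministic update rule, which fits the thread's framing of neurons as simple deterministic units.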
I have a tough time seeing it your way. I think an autonomous entity has - is - a mind. Archaea, bacteria, and amoeba live on their own. Neurons do not. I think neurons are part of a mind; part of the chain connecting the sensor and doer. In the archaea, being single celled, that chain is made of molecules. We couldn't (at least I couldn't) say any of the molecules are minds. And I think the neurons in a hydra are more complex links in the hydra's chain, rather than each being a mind within the mind of the hydra.
Accordingly, every mind requires a minimum of two thinking elements:
•A sensor that responds to its environment
•A doer that acts upon its environment — Ogas and Gaddam
They talk about the amoeba, which has the required elements.
Obviously, these definitions of mind and thinking are as basic as can be. But it's where it all starts.
Can a neuron be said to have a mind, to think, by these definitions?
— Patterner
I don't see why not.
The sensor aspect of thought so defined: via its dendrites, the neuron senses the axonal firings of fellow neurons in its environment (the axons of other neurons to which its dendrites are connected via synapses) and responds to that environment by firing its own axon so as to stimulate other neurons via their dendrites.
The doer aspect of thought so defined: the neuron's growth of dendrites and axon (which is requisite for neural plasticity) occurs with the, at least apparent, purpose of finding, or else creating, new synaptic connections via which to stimulate and be stimulated - this being a doing in which the neuron acts upon its environment in novel ways.
To me, it seems to fit the definitions of mind offered just fine. — javra
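For concreteness, the sensor/doer reading of a single neuron given above can be put into a toy model. This is only an illustrative sketch under deliberately crude assumptions (scalar signals, a fixed firing threshold, made-up names such as ToyNeuron and grow_synapse); it is not offered as a faithful model of real neurons.

```python
class ToyNeuron:
    """A toy neuron cast in Ogas and Gaddam's sensor/doer terms."""

    def __init__(self, name, threshold=1.0):
        self.name = name
        self.threshold = threshold
        self.pending_input = 0.0   # what the "dendrites" have sensed this step
        self.axon_targets = []     # (downstream neuron, synaptic weight) pairs

    def grow_synapse(self, other, weight=0.6):
        # Doer in the plasticity sense: creating a new connection to act through.
        self.axon_targets.append((other, weight))

    def receive(self, signal):
        # Sensor aspect: respond to activity in the environment of fellow neurons.
        self.pending_input += signal

    def step(self):
        # Doer aspect: act on that environment by firing the axon when stimulated enough.
        fired = self.pending_input >= self.threshold
        self.pending_input = 0.0
        if fired:
            for target, weight in self.axon_targets:
                target.receive(weight)
        return fired


# Two neurons, one synapse: stimulate A strongly enough and it fires into B.
a, b = ToyNeuron("A", threshold=1.0), ToyNeuron("B", threshold=0.5)
a.grow_synapse(b, weight=0.6)
a.receive(1.2)
print(a.step())  # True: A sensed enough input and "did" something, namely fire
print(b.step())  # True: B in turn received 0.6 >= 0.5 and fires as well
```

Whether this thin loop of sensing and doing deserves to be called a mind is of course exactly what is in dispute; the sketch only shows that the two defining elements can both be located inside one neuron-like unit.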
The italics are theirs, and the phrase is a link to a quote from The Computational Brain, by Patricia Churchland and Terrence Sejnowski:
There are sensor neurons and doer neurons, which play the same roles as sensors and doers in molecule minds. Each neuron is composed of molecular thinking elements, including molecular doers (which release neurotransmitters into a synapse, for instance) and molecular sensors (which detect the voltage on the neuron membrane, for instance). Functionally, every neuron is a self-contained molecule mind. — Ogas and Gaddam
Research on the properties of neurons shows that they are much more complex processing devices than previously imagined. For example, dendrites of neurons are themselves highly specialized, and some parts can probably act as independent processing units. — Churchland and Sejnowski
I have a tough time seeing it your way. I think an autonomous entity has - is - a mind. Archaea, bacteria, and amoeba live on their own. Neurons do not. I think neurons are part of a mind; part of the chain connecting the sensor and doer. In the archaea, being single celled, that chain is made of molecules. We couldn't (at least I couldn't) say any of the molecules are minds. And I think the neurons in a hydra are more complex links in the hydra's chain, rather than each being a mind within the mind of the hydra. — Patterner
I think my difficulty lies in the fact that I haven't been at any of this for very long. I always took mind and consciousness to be pretty much the same thing. Intellectually, I see a difference. But my feeling that they are the same still intrudes at times. I'm working on it. :grin: — Patterner
By the way, I found a simple neural network that can perform a simple sum. — MoK
how can deterministic processes lead to the realization of options? — MoK
Options cannot be an illusion. If I show you two balls that look similar, you will realize that there are two balls and that they look identical. There are even artificial neural networks that can count similar objects.
Maybe the "options" are an illusion. — ENOAH
I am not talking about decisions in this thread.
The determinism in neural processes seems obvious to us since science has constructed that Narrative and it is conventional; i.e., that synapses are triggered by xyz, and there is no moment of an agent choosing to take a certain path. — ENOAH
The Standard Model was confirmed experimentally and it is a deterministic model. The experiments are performed very carefully, so we are sure about how particles interact with each other. It is, however, true that when it comes to a system we cannot know the exact location of its parts, so we cannot predict the future state of the system for sure, but that is not what I am talking about. I am mostly interested in understanding how we could realize options given the fact that any physical system, for example the brain, is a deterministic entity. I am sure that the realization of options is due to the existence of neurons in the brain, but it is still unclear to me how neural processes in the brain can lead to the realization of those options.
The existence of possibilities is that which follows from the fact that any course of action is not given in advance. That is, in a sense the world is always in play. No matter how well our expectations or predictions are fulfilled, there is always something not given in becoming. We can foresee that the sun will die in X years, but nevertheless it is not given. To the extent that there is something not given, thought is able to think of possibilities; there is always something left over that escapes prediction. — JuanZu
We can say for sure that physical systems are deterministic since physicists closely examine the motion and interaction of elementary particles. Anyway, the purpose of this thread was not to discuss determinism but to understand how we can realize options given the fact that we have a brain.
The determinist has to explain how the future is given. But that is something that cannot be done, since predictions are always possibilities and are representations of becoming. How does a prediction turn out to be true? Even if it turns out to be true, it is still a representation of becoming and not becoming itself. That is why we cannot say that things are determined, because they are only determined in the representation but not in becoming itself. — JuanZu
I am saying that given a system in state X and the laws of nature, one always predicts, and later finds, the system in state Y.
I think you have missed my point. If you tell me that there is a deterministic system that will end up in state X, you are making a prediction. — JuanZu
I don't understand why you assume the system is not in state X. The system cannot be in any state other than the one which was predicted.
But if the system is not in its state X, the prediction of the system cannot be confused with reality. — JuanZu
The prediction is about what is going to happen in reality, and in a deterministic system the system always ends up in Y given X.
That is, the prediction is a representation, not reality itself. — JuanZu
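To make the X-to-Y claim in this exchange concrete: in a deterministic model, the "laws of nature" play the role of a fixed transition function, so computing the prediction and letting the system run are literally the same calculation. A minimal sketch follows; the particular update rule is an arbitrary stand-in for "the laws", chosen only for illustration.

```python
# A toy deterministic "law of nature": the next state is a fixed function of the current one.
def law(state: float) -> float:
    return 3.2 * state * (1.0 - state)   # arbitrary illustrative rule

def evolve(state: float, steps: int) -> float:
    for _ in range(steps):
        state = law(state)
    return state

x = 0.4                       # the system is in state X
prediction = evolve(x, 10)    # the state we predict it will be in after 10 steps
outcome = evolve(x, 10)       # the state it actually ends up in after 10 steps
print(prediction == outcome)  # True: given X and the rule, Y is the only possibility
```

Whether real physics is relevantly like this toy rule is exactly what JuanZu is contesting; the sketch only shows what "given X, the system always ends up in Y" means operationally.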
There is no other possibility in reality. Determinism has been tested to great accuracy.
The prediction is one possibility among others, even if it is confirmed. — JuanZu
We don't need to test all processes of reality to make sure that reality is deterministic, nor would that be possible.
And this is due to the non-givenness of becoming. We could only be absolute determinists if all the processes of reality were already given. — JuanZu
We couldn't possibly do any science if this statement were true. For example, the computer you are using right now always works in a certain way. It doesn't work one way one day and another way another day.
No matter how many experiments you do, predictions will always be imagined representations of what will happen, i.e. possibilities among others. — JuanZu