• Joshs
    6.4k
    From an external point of view, cognition is private and indirect. From an internal point of view, cognition is public and direct. So Husserl and Descartes can be both semantically correct, provided that we don't mix their postulates and apply them in different contexts.sime

    Husserl’s point is that the external, third-person point of view is a derived abstraction constituted within first-person subjectivity.
  • boundless
    566
    That bothers me since it contradicts physicalism since there can be physical things that cannot be known, even in principle. Science cannot render to a non-bat, even in principle, what it's like to be a bat. So I would prefer a different definition.noAxioms

    OK. So what is 'physical' in your view? IIRC you also agree that physical properties are relational, i.e. they describe how a given physical object relates to/interacts with other physical objects.
    'Scientistic physicalism' is also inconsistent IMO because, after all, that there is a physical world is not something we discover by doing science.

    Other than 'consciousness' I also believe in the existence of other things that are 'real' but not 'physical'. I am thinking, for instance, of mathematical truths. But this is perhaps OT.

    Materialism typically carries a premise that material is fundamental, hence my reluctance to use the term.noAxioms

    Ok, yes. But it does sometimes clarify at least a meaning that 'physical' can have. For instance, if by matter one means "whatever object exists in a given location of space in a given time", would you agree that this is also what you mean by 'physical'? Note that this would also include radiation not just 'matter' as the word is used by physicists.

    Has consciousness a 'definite location' in space, for instance?

    People have also questioned how eyes came into being, perhaps as an argument for ID. ID, like dualism, posits magic for the gaps, but different magic, where 'magic' is anything outside of naturalism. Problem is, anytime some new magic is accepted, it becomes by definition part of naturalism. Hypnosis is about as good an example as I can come up with. Meteorites are another. Science for a long time rejected the possibility of rocks falling from the sky. They're part of naturalism now.noAxioms

    OK. But IMHO you're thinking in excessively rigid categories. Either one is a 'physicalist/naturalist' or one accepts 'magic'. Maybe there is something that is not 'natural'. Again, mathematical truths seem to me exactly an example of something that is not natural and yet real. One would stretch the meaning of 'natural/physical' too much by including mathematical truths in it.

    So, I guess that my response here can be summarized in one question for you: what do you mean by 'physical' (or 'natural'), and why do you think that consciousness is 'physical'?

    Agree.noAxioms

    :up:
  • Apustimelogist
    899

    Yes, I guess it depends on how easily convinced you are about this being the case. For me, without further reason to believe otherwise, it seems like the biggest roadblock in modelling something like the brain is intractable complexity. There is no indication that in principle we cannot someday model all our own behaviors and reports through computer models. I think even just looking at AI now indicates that there isn't really a conceivable limit on what they can do given enough power and the right inputs, which is what you might expect from something which is Turing complete: i.e. they can compute anything in principle.
  • Joshs
    6.4k

    There is no indication that in principle we cannot someday model all our own behaviors and reports through computer models. I think even just looking at AI now indicates that there isn't really a conceivable limit on what they can do given enough power and the right inputs, which is what you might expect from something which is Turing complete: i.e. they can compute anything in principle.Apustimelogist

    The results of modeling the brain on today’s computers, using today’s forms of computer logic, are precisely as you describe. And they will colossally miss what philosophers and psychologists are coming to appreciate is the central feature of brains: that they are embodied and enactive. So, no, it won’t be today’s generation of A.I. that can express this understanding, and it has nothing to do with power and inputs. In about 10 to 20 years, we will likely see the emergence of a different kind of A.I. operating according to a different logic, that of complex dynamical systems (CDS).

    Ultimately, CDS-based AI chips may blur the line between computation and physical processes, resembling intelligent materials more than traditional silicon. As Stephen Wolfram notes: “The most powerful AI might not be programmed; it might be cultivated, like a garden of interacting dynamical systems.”

    When AI chips fully integrate complex dynamical systems (CDS) models, they will likely diverge radically from today’s parallel architectures (e.g., GPUs, TPUs) by embodying principles like self-organization, adaptive topology, and physics-inspired computation. Here’s a speculative breakdown of their potential design and function:

    Architectural Shifts: From Fixed to Fluid.

    Current A.I. Chips:

    Fixed parallel cores (e.g., NVIDIA GPU clusters)
    Deterministic von Neumann logic
    Digital (binary) operations
    Centralized memory (RAM)

    Future CDS AI Chips:

    Reconfigurable networks of nano-scale nodes that dynamically form/break connections (like neural synapses).
    Nonlinear, chaotic circuits exploiting emergent stability (e.g., strange attractors).
    Analog/quantum-hybrid systems leveraging continuous dynamics (e.g., oscillatory phases).
    Distributed memory where computation and storage merge (like biological systems).
  • Mijin
    302
    Sure we do. Q3 is easy. The ball-catching robot was one. A fly evading a swat is another. If one is searching for a model, you start simple and work your way up to something as complex as how our experience works.noAxioms

    But I reject the breakdown into those three questions, if you're going to insist that neuroscience cannot ask Q2.

    The hard problem is Q2 and it is legitimate for science to want to know how a neural net can have experiences.

    It seems a bit pointless to me to keep deflecting from the hard problem to declare that there is no hard problem.
  • Apustimelogist
    899

    I mean, none of this has any relevance to any points I am making. Obviously, to artificially recreate a human brain to acceptable approximation, you need to construct this computational system with the kinds of inputs, kinds of architectures, capabilities, whatever, that a human does. I am not making any arguments based on specific assumptions about specific computing systems, just on what is in principle possible.
  • Joshs
    6.4k
    ↪Joshs
    I mean, none of this has any relevance to any points I am making. Obviously, to artificially recreate a human brain to acceptable approximation, you need to construct this computational system with the kinds of inputs, kinds of architectures, capabilities, whatever, that a human does. I am not making any arguments based on specific assumptions about specific computing systems, just on what is in principle possible.
    Apustimelogist

    I will say bluntly that no machine we invent will do what we do, which is to think. As Evan Thompson wrote:

    LLMs do not perform any tasks of their own, they perform our tasks. It would be better to say that they do not really do anything at all. Thus, we should not treat LLMs as agents. And since LLMs are not agents, let alone epistemic ones, they are in no position to do or know anything.

    The map does not know the way home, and the abacus is not clever at arithmetic. It takes knowledge to devise and use such models, but the models themselves have no knowledge. Not because they are ignorant, but because they are models: that is to say, tools. They do not navigate or calculate, and neither do they have destinations to reach or debts to pay. Humans use them for these epistemic purposes. LLMs have more in common with the map or abacus than with the people who design and use them as instruments. It is the tool creator and user, not the tool, who has knowledge.

    I think what he wrote about LLMs applies to all of the devices we build. They are not separate thinking systems from us; they are and always will be our appendages, like the nest to the bird or the web to the spider.
  • boundless
    566
    :up: yeah, I often compare computers to highly sophisticated mechanical calculators. At the end of the day all LLMs are very complex computers and they operate according to algorithms (programmed by us) just like mechanical calculators.

    I don't think that many people would think that mechanical calculators or a windmill or mechanical clocks etc have 'awareness' or 'agency'. And computers just like them perform operations without being agents.

    In order to have consciousness, computers IMO would have to be aware of what they are doing. There is no evidence that they have such an awareness. All their activities can be explained by saying that they just do what they are programmed for.
  • Apustimelogist
    899
    I will say bluntly that no machine we invent will do what we do, which is to think.Joshs

    I don't see the grounds for such a statement. A brain is just a certain kind of machine, and it thinks. If brains exist, then in principle you can build one. LLMs don't have a lot of things humans have, but that doesn't mean that in principle you couldn't build machines that do.

    and they operate according to algorithms (programmed by us) just like mechanical calculators.boundless

    And you don't think we do? Our brains are bundles of neurons which all work in very similar ways. You could easily make an argument that we operate in accordance with some very basic kind or family of algorithms recapitulated in many different ways across the brain.

    All their activities can be explained by saying that they just do what they are programmed for.boundless

    As can a human brain.
  • noAxioms
    1.7k
    The hard problem is Q2 and it is legitimate for science to want to know how a neural net can have experiences.Mijin
    I can accept that.


    OK. So what is 'physical' in your view? IIRC you also agree that physical properties are relational, i.e. they describe how a given physical object relates to/interacts with other physical objects.boundless
    It means that all energy and particles and whatnot obey physical law, which yes, pretty much describes relations. That's circular, and thus poor. It asserts that this description is closed, not interfered with by entities not considered physical. That's also a weak statement since if it was ever shown that matter had mental properties, those properties would become natural properties, and thus part of physicalism.
    So I guess 'things interact according to the standard model' is about as close as I can get. This whole first/third person thing seems a classical problem, not requiring anything fancy like quantum or relativity theory, even if say chemistry would never work without the underlying mechanisms. A classical simulation of a neural network (with chemistry) would be enough. No need to simulate down to the molecular or even quantum precision.

    'Scientistic physicalism' is also inconsistent IMO because, after all, that there is a physical world is not something we discover by doing science.
    That's a philosophical stance, I agree.

    Other than 'consciousness' I also believe in the existence of other things that are 'real' but not 'physical'. I am thinking, for instance, of mathematical truths.
    OK. Not being a realist, I would query what you might mean by that. I suspect (proof would be nice) that mathematical truths are objectively true, and that the structure that includes our universe supervenes on those truths. Whether its being true implies that it's real depends on one's definition of 'real', and I find it easier not to worry about that arbitrary designation.

    But it does sometimes clarify at least a meaning that 'physical' can have. For instance, if by matter one means "whatever object exists in a given location of space in a given time", would you agree that this is also what you mean by 'physical'?
    Are space and time not physical then? Neither meets your criterion of 'object', but I think I would include them under 'physicalism'. Not all universes have them, and those might have very different definitions of what is physical or material.

    Has consciousness a 'definite location' in space, for instance?boundless
    Me considering that to be a process of material that has a location, it seems reasonably contained, thus, yes. Not a point, mind you, but similar to how a rock occupies a region of space and time.

    IMHO you're thinking in rigid categories. Either one is a 'physicalist/naturalist' or one accepts 'magic'.
    Right. Science cannot make progress with an attitude like that. Most magic is replaced by natural explanations, but occasionally 'magic' explanations are adopted as part of naturalism. I gave a couple of examples of that.
    By magic, I mean an explanation that just says something unknown accounts for the observation, never an actual theory about how this alternate explanation might work. To my knowledge, there is no theory anywhere of matter having mental properties, and how it interacts with physical matter in any way. The lack of that is what puts it in the magic category.

    Maybe there is something that is not 'natural'. Again, mathematical truths seem to me exactly an example of something that is not natural and yet real.
    That seems to be like saying atoms are not real because they're not made of rocks.

    One would stretch too much the meaning of 'natural/physical' to also include mathematical truths in it.boundless
    I agree, since those truths hold hopefully in any universe, but our natural laws only work in this one (and similar ones).

    why do you think that consciousness is 'physical'?
    I've seen no evidence from anybody that physical interactions cannot account for it. Sure, it's complex and we don't know how it works. But that it cannot work? That's never been demonstrated.

    At the end of the day all LLMs are very complex computers and they operate according to algorithms (programmed by us) just like mechanical calculators.boundless
    I can argue that people also are this, programmed by ancestors and the natural selection that chose them. The best thinking machines use similar mechanisms to find their own best algorithms, not any algorithm the programmer put there. LLM is indeed not an example of this.


    I can see how electrons moving from atom to atom is electricity.
    I can see how the movement of air molecules is heat and pressure.
    I can see how the movement of an object is force: F=ma.
    I can see how a fluid, whether liquid or gas, flowing around an object creates lift, which is a factor in flight.

    All of those examples are physical activities
    I don't see how self-awareness is a physical activity
    Patterner
    You understand the former because those are quite trivial interactions. Then you jump to something with complexity beyond the current state of science. But not understanding how something works is not any sort of evidence that it isn't still a physical process.

    The game playing machine beats everybody at Go. Nobody, not even its creators, know how it works. It wasn't taught any strategy, only the rules. It essentially uses unnatural selection to evolve an algorithm that beats all opponents. That evolved (hard deterministic) algorithm is what nobody understands, even if they look at the entire data map. But nobody concludes that it suddenly gets access to magic. Such a conclusion comes with an obvious falsification test.
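    As a toy of that 'unnatural selection' idea, here is a minimal (1+1) hill climber of my own; to be clear, this is only an illustration of the principle, not how Go engines are actually trained (they use self-play reinforcement learning). The point it sketches: the programmer supplies only a scoring rule, never a strategy, yet mutation plus selection finds the strategy anyway.

```python
import random

# Selection toy: the "programmer" supplies only the rules (a score),
# never a strategy. Mutation plus selection finds the strategy anyway.
def evolve(score, length=20, generations=500, seed=0):
    rng = random.Random(seed)
    best = [rng.randint(0, 1) for _ in range(length)]
    for _ in range(generations):
        child = best[:]
        i = rng.randrange(length)
        child[i] ^= 1                      # random mutation of one bit
        if score(child) >= score(best):    # selection: keep the winner
            best = child
    return best

# "Rules only": reward agreeing with a hidden target policy.
target = [1, 0] * 10
score = lambda s: sum(a == b for a, b in zip(s, target))
print(evolve(score))  # converges to the target without being told it
```

    Nobody wrote the final bitstring in; it emerges from keeping whatever scores at least as well, which is the sense in which an evolved strategy is not the programmer's algorithm.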


    Descartes asks "What can I know with certainty?" while Husserl asks "How does anything come to be given to consciousness at all?"Joshs
    Not only am I not certain about what Descartes knows with certainty, but I actually find the conclusion unlikely. Of course I have access to science that he doesn't.
    As for 'come to be given to ...', that seems like the conclusion is already drawn, and he's trying to rationalize how that might work.


    from something which is Turing complete: i.e. they can compute anything in principle.Apustimelogist
    Something Turing complete can compute anything a Turing machine can, which is a lot, but not anything. Technically nothing is Turing complete, since a Turing machine has an infinite tape on which to operate.
    Such machines are a model of capability, but not in any way a model of efficiency. Nobody makes one to get any actual work done, but it's wicked interesting to make one utilizing nonstandard components like train tracks.
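    To make the capability-versus-efficiency point concrete, here is a toy Turing machine simulator (my own sketch, not anything from the thread); the rule table is a hypothetical example, and the step bound stands in for the fact that real runs, unlike the idealized machine, must be finite:

```python
# Minimal Turing machine: rules map (state, symbol) -> (new_symbol, move, new_state).
# This toy machine walks right, turning each 1 into 0, and halts on a blank.
def run_tm(rules, tape, state="start", blank="_", max_steps=1000):
    tape = dict(enumerate(tape))   # sparse tape: position -> symbol
    pos = 0
    for _ in range(max_steps):     # a real TM has no step bound
        if state == "halt":
            break
        sym = tape.get(pos, blank)
        new_sym, move, state = rules[(state, sym)]
        tape[pos] = new_sym
        pos += 1 if move == "R" else -1
    return "".join(tape[i] for i in sorted(tape))

rules = {
    ("start", "1"): ("0", "R", "start"),
    ("start", "_"): ("_", "R", "halt"),
}
print(run_tm(rules, "111"))  # prints 000_
```

    The rule table is the entire 'program'; everything else is bookkeeping, which is why such machines model capability but say nothing about efficiency.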

    As Stephen Wolfram notes: “The most powerful AI might not be programmed; it might be cultivated, like a garden of interacting dynamical systems.”Joshs
    I like that quote.

    Obviously, to artificially recreate a human brain to acceptable approximation, you need to construct this computational system with the kinds of inputs, kinds of architectures, capabilities, whatever, that a human does.Apustimelogist
    Were I to simulate a human, I'd probably not give it inputs at all. Almost all simulations I've run do it stand-alone with no input at all. Logged output for later analysis, but that doesn't affect the simulation. Of course this means your simulated person needs to be in a small environment, also simulated.

    I will say bluntly that no machine we invent will do what we do, which is to think.Joshs
    Noted. How very well justified. Your quote is about LLMs, which are mildly pimped-out search engines. Compare that to devices which actually appear to think and to innovate. What do you call it if you refuse to apply the term 'think' to what it's doing?

    The quote goes on to label the devices as tools. True now, but not true for long. I am arguably a tool since I spent years as a tool to make money for my employer. Am I just a model then?


    Postmodern philosophy has become like Big Pharma, in that the latter creates ailments to sustain medicinal inventions while the former creates scenarios bordering on superfluous overreachMww
    Nice analogy. It explains Chalmers' motivation for creating a problem where there really isn't one.
  • boundless
    566
    And you don't think we do? Our brains are bundles of neurons which all work in very similar ways. You could easily make an argument that we operate in accordance with some very basic kind or family of algorithms recapitulated in many different ways across the brain.Apustimelogist

    No, I don't, and you haven't provided sufficient evidence here to convince me of your view. Rather, it seems to me that, given the impressive results we have obtained with computers, you are concluding that our cognition is also algorithmic.

    I believe that there is a difference between conscious - and in general living - beings and algorithmic devices. All living beings seem to have a 'sense' of unity, that there is a distinction between 'self' and 'not self' and so on. They do not just 'do' things.

    Regardless, I don't think there is any consensus on this topic among scientists. So, after all in a way both our positions are speculative.

    As can a human brain.Apustimelogist

    Says you. To me there is a clear difference between how human cognition works and how, say, a mechanical calculator works. And I am open to the idea that, perhaps, our cognition can't even be wholly comprehended by mathematical models, let alone only algorithms.
  • boundless
    566
    It means that all energy and particles and whatnot obey physical law, which yes, pretty much describes relations. That's circular, and thus poor. It asserts that this description is closed, not interfered with by entities not considered physical. That's also a weak statement since if it was ever shown that matter had mental properties, those properties would become natural properties, and thus part of physicalism. So I guess 'things interact according to the standard model' is about as close as I can get. This whole first/third person thing seems a classical problem, not requiring anything fancy like quantum or relativity theory, even if say chemistry would never work without the underlying mechanisms. A classical simulation of a neural network (with chemistry) would be enough. No need to simulate down to the molecular or even quantum precision.noAxioms

    Ok for the definition! Yes, and GR seems to imply that both spacetime and 'what is inside of it' are 'physical/natural'. I disagree with your view that mathematical truths are 'natural', though. They seem to be independent of space and time. That our minds are not 'natural' (in this broad sense) is perhaps more controversial. But the fact that we can know mathematical truths is quite interesting if we are 'wholly natural' (I do not know...). It seems to me, however, that it is better to reframe the 'hard problem' in a different way: can consciousness arise from what is completely inanimate?

    The confidence you have in the power of algorithms seems to arise from an underlying assumption that every natural process is 'algorithmic'. Of course, I am not denying the enormous success of algorithmic models and simulations, but I am not sure that they will ever be able to give us a completely accurate model/simulation of all processes.

    I admit that I can't give you a scientific argument against your assumption. But for me my phenomenological experience strongly suggests otherwise (self-awareness, the ability to choose and so on do not seem to be easily explainable in terms of algorithms).

    OK. Not being a realist, I would query what you might mean by that. I suspect (proof would be nice) that mathematical truths are objectively true, and the structure that includes our universe supervenes on those truths. It being true implying that it's real depends on one's definition of 'real', and I find it easier not to worry about that arbitrary designation.noAxioms

    I lean towards a form of platonism where mathematical truths are concepts and yet are timeless and independent of space. It seems the only position that makes sense considering the following: the fact that we know them as concepts, the incredible success that mathematical laws have in describing the behaviour of physical processes, the apparently evident 'eternity' of mathematical truths (that there are infinitely many prime numbers seems to me independent of any human discovery of that fact, for instance), and so on.

    Of course, I am under no illusion that I can give an absolutely convincing argument for my view (as often happens in philosophy), but it seems to me the best view after weighing the arguments for and against it.

    Me considering that to be a process of material that has a location, it seems reasonably contained thus, yes. Not a point mind you, but similarly a rock occupies a region of space and time.noAxioms

    Ok. In a general sense, yeah, I perhaps can agree with you that mind is natural or even 'physical'. But it has quite peculiar attributes that are difficult to explain as arising from 'inanimate' matter. And, as I said before, it seems to have the capacity to understand/know 'something' that is not 'natural'.

    By magic, I mean an explanation that just says something unknown accounts for the observation, never an actual theory about how this alternate explanation might work. To my knowledge, there is no theory anywhere of matter having mental properties, and how it interacts with physical matter in any way. The lack of that is what puts it in the magic category.noAxioms

    Ok, I see. But consider that under this definition you risk including under 'magic' many partial or unclear explanations that I would not put under that word. In other words, your category of 'magic' seems excessively broad.

    For instance, if we were talking in the 14th century and you claimed that 'atoms' exist and 'somehow' interact through forces that we do not know to form the visible objects, would this be 'magic' (of course, you have to imagine yourself as having the scientific knowledge of the time)?

    I can argue that people also are this, programmed by ancestors and the natural selection that chose them. The best thinking machines use similar mechanisms to find their own best algorithms, not any algorithm the programmer put there. LLM is indeed not an example of this.noAxioms

    Am I wrong to say, however, that the operations of these 'thinking machines' are completely explainable in terms of algorithms?
    As I said in my previous post, I can't neglect the fact that my own self-awareness, the experience of self-agency, and so on seem to indicate that we are not like that.
  • RogueAI
    3.4k
    A fly evading a swat is another.noAxioms

    Is there something it's like to be a fly evading a swat? How do we know? How could we ever find out? Isn't the inability to answer those questions a "hard problem"?
  • Apustimelogist
    899
    No, I don't, and you haven't provided sufficient evidence here to convince me of your view. Rather, it seems to me that, given the impressive results we have obtained with computers, you are concluding that our cognition is also algorithmic.boundless

    How would you interpret the fact that our brain (or at least the component that seems involved in processing information and long-distance message-passing) is almost entirely composed of the same kind of cell, with the same fundamental underlying physiological and anatomical structures and mechanisms in terms of membrane potentials that induce action potentials?

    We don't have a deep enough understanding to build detailed, realistic, functioning models of exactly what human brains are doing and why, but we have a reasonably good basis for understanding the kind of information-processing principles that underlie what neurons do, such as efficient, sparse, predictive coding using recurrent connectivity. And really, LLM architectures work under very similar basic principles to what neurons do, which is just prediction. You can find studies showing that the same kind of models used for LLMs are actually very good at predicting neural responses to things like language processing, because fundamentally they are doing the same thing: prediction.

    All living beings seem to have a 'sense' of unity, that there is a distinction between 'self' and 'not self' and so on. They do not just 'do' things.boundless

    There is no reason to think that these things can't be achieved with the same fundamental processes that transformers already use... why? Because they work in the same way brains do. The difference is that all LLMs are trained to do is predict words. Human brains don't just predict but act and control themselves; not just that, but these things are clearly biased, in terms of the evolutionarily-conserved structure of the brain itself, toward very biologically specific control (i.e. what you would call homeostatic and allostatic).

    But the point is that there is no reason to think these things cannot be performed by the same processes that fundamentally underlie what transformers and LLMs do, if you just structure or design them in a way that allows them to do that. It would be surprising if they didn't, imo, because that seems to be what brains do. Neurons share the same core fundamental physiological, anatomical, and functional properties, and there is the same kind of interplay between excitation and inhibition, used for everything from homeostatic regulatory responses in the hypothalamus and midbrain to visual processing, motor control, language, executive functions, emotion, and emotional control. There is of course great variety in neurons and structures across the brain, but they all share fundamental commonalities, with some shared core which is virtually ubiquitous.
  • hypericin
    1.9k
    The primary disconnect seems to be that no third-person description can convey knowledge of a first-person experiencenoAxioms

    Without reading the full post, this misses the problem.

    The problem is, no third-person explanation can arrive at first-person experience. There is an 'explanatory gap'. Not only do we not know the specific series of explanations that starts at neural facts and ends at first-person experience; conceptually, it doesn't seem possible that any such series can exist.
  • Patterner
    1.7k
    But not understanding how something works is not any sort of evidence that it isn't still a physical process.noAxioms
    Maybe so. But not understanding how it works is certainly not any sort of evidence that it is a physical process.


    I'm wondering if you can tell me how this works. Or tell me what's wrong with my understanding.

    This is what Google AI says about the release of neurotransmitters:
    1. Arrival of Action Potential:
    The action potential travels down the axon of the presynaptic neuron and reaches the axon terminal.

    2. Calcium Influx:
    The arrival of the action potential opens voltage-gated calcium channels at the axon terminal.
    Calcium ions (Ca2+) flow into the neuron.

    3. Fusion of Synaptic Vesicles:
    Ca2+ binds to proteins on the synaptic vesicles, which are small membrane-bound structures containing neurotransmitters.
    This binding triggers the fusion of the synaptic vesicles with the presynaptic membrane.

    4. Neurotransmitter Release:
    As the vesicles fuse, the neurotransmitters are released into the synaptic cleft, the space between the presynaptic and postsynaptic neurons.

    5. Diffusion and Binding:
    The released neurotransmitters diffuse across the synaptic cleft and bind to receptors on the postsynaptic neuron.

    6. Termination of Neurotransmitter Action:
    Neurotransmitters are eventually removed from the synaptic cleft by reuptake into the presynaptic neuron, enzymatic breakdown, or diffusion away from the receptors.

    Here's what it says about the first step - Action Potential:
    Resting Membrane Potential: In a resting neuron, the inside of the cell is more negative than the outside, establishing a resting membrane potential (around -70 mV).

    Threshold: A stimulus, often in the form of chemical signals from other neurons (neurotransmitters), causes the membrane to depolarize (become less negative). If this depolarization reaches a critical "threshold" level (e.g., -55 mV), it triggers an action potential.

    Depolarization: At threshold, voltage-gated sodium channels open rapidly, allowing a large influx of positively charged sodium ions into the cell. This makes the inside of the neuron rapidly more positive.

    Repolarization: Sodium channels then inactivate, and voltage-gated potassium channels open, allowing positively charged potassium ions to flow out of the cell. This efflux of potassium ions causes the membrane potential to become more negative again, moving it back towards the resting potential.

    Hyperpolarization: The potassium channels may remain open a bit longer than needed, causing the membrane potential to dip below the resting potential before they close.

    Return to Rest: Finally, ion pumps (like the sodium-potassium pump) restore the resting membrane potential, preparing the neuron for another action potential.
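    The resting/threshold/reset cycle described above is often compressed into the leaky integrate-and-fire model, a standard textbook simplification rather than anything in the quoted description. In the sketch below, the -70 mV rest and -55 mV threshold are the figures quoted; the other parameters (time constant, resistance, input currents) are illustrative assumptions of my own:

```python
# Leaky integrate-and-fire: a toy of the resting-potential/threshold cycle.
# The membrane voltage leaks toward rest (-70 mV); input current pushes it
# up; crossing threshold (-55 mV) counts as an action potential, then reset.
def simulate_lif(input_current, v_rest=-70.0, v_thresh=-55.0,
                 tau=10.0, dt=1.0, r=10.0):
    v = v_rest
    spike_times = []
    for t, i_in in enumerate(input_current):
        v += (dt / tau) * ((v_rest - v) + r * i_in)  # leak + drive
        if v >= v_thresh:         # threshold crossed: "fire"
            spike_times.append(t)
            v = v_rest            # reset toward rest (repolarization)
    return spike_times

# A strong constant drive fires periodically; a weak one never reaches
# threshold, reflecting the all-or-nothing character of the description.
print(simulate_lif([2.0] * 50))  # a few evenly spaced spike times
print(simulate_lif([0.5] * 50))  # no spikes: []
```

    This captures only the first step (threshold and firing), which is exactly why the coordination questions below are the interesting ones.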

    You say all of this, along with whatever other processes are taking place, is a description not only of things like receiving sensory input, distinguishing wavelengths of the electromagnetic spectrum, and receptors on my tongue distinguishing molecules that have made contact, but also of seeing the color red and tasting the sweetness of sugar. More than that, it's a description of my thoughts.

    My thoughts are what I'm really wondering about at the moment. That kind of activity is why I'm thinking of the number 7. And, over the next several seconds, due to the laws of physics working on what was there, the arrangements of all the particles of my brain change from X to Y to Z. And those arrangements just happen to mean "+ 18 = 25".

    The same could be said for any thoughts I ever have, mathematical or otherwise.

    Of course, it's not simply one chain of neurons involved in a thought. I wouldn't care to guess how many are involved in any given thought. Or how many are involved multiple times in a single thought. There's probably all kinds of back tracking and looping.

    How does all that work??? In particular:

    -How do progressions of arrangements of all the particles in my brain mean all the things they mean?

    -How do all the action potentials and releases of neurotransmitters coordinate throughout the brain, or an area of the brain, so that X, Y, and a million others happen at the same time in order to bring about the needed thought? (I could understand if one specific neuron initiated it all, so that the timing would be assured. But that would mean the single neuron already had the thought, and initiated all the activity to, shall we say, actualize(?) the thought. But that whole idea is a bit silly.)
