• tim wood
    9.3k
    Deleted by author
  • Mww
    4.9k
    We learn by abstraction from experience.Dfpolis

    Hmmmm, yes. I see. I see you’re talking about learning, I’m talking about understanding.

    I understand the abstraction of numbers. Like a concrete number without its denominant. Such being so, how do you suppose culturally differentiated systems find a commonality in their respective analysis? What is the same for a child here and now arriving at “5”, and a medieval Roman child arriving at “V”?

    My question would be, if we had a priori "knowledge," what reason would we have to believe that it applied to the world of experience?Dfpolis

    One reason to believe would be, the world of experience satisfies some prerogatives that belong to a priori truths, re: one doesn’t need the experience of a severe car crash to know a severe car crash can kill him. But general a priori truths have nothing whatsoever to do with experience (hence the standing definition), but are sustained by the principles of universality and necessity, for which experience can never suffice, re: two parallel lines can never enclose a space. I think it’s more significant, not that we do know some truths a priori, but that we can.
  • Dfpolis
    1.3k
    Please excuse the delay, I've had some sort of a tiring "bug."

    So, something like aristotelian realism about universals?aporiap

    Exactly.

    I'm not familiar with terms like 'notes of comprehension' or 'essential notes'.aporiap

    You might think of an object's notes of intelligibility as things that can be known and predicated of the object. Notes of comprehension would be those actually understood and constituting some abstraction. "Essential notes" would be notes defining an object -- placing it into a sortal.

    You say that logical distinction is predicated on the fact that intentional objects like concepts are different from materiality not ontologically but by virtue of not sharing these notes of comprehension.aporiap

    Yes. Most of what we think about are ostensible unities. We can "point them out" in some way, and they have some intrinsic integrity; these are Aristotle's substances (ousia). Examples are humans, galaxies, quanta, societies, etc. Clearly some are more unified than others, but all have some dynamic that allows us to think of them as wholes.

    Extended wholes can be divided and so their potential parts are separable. Logical distinction does not depend on physical separability, but on having different notes of comprehension. The matter and form of a ball are inseparable, but they are distinct, because the idea of form abstracts away the object's matter and that of matter abstracts away its form. So, that we can think of humans as material and intentional does not mean that they are composed of two substances any more than balls are.

    I mentioned in the post that it poses a problem for programs which require continual looping or continual sampling. In this instance the program would cease being an atmospheric sampler if it lost the capability of iteratively looping because it would then lose the capability to sample [i.e. it would cease being a sampler.]aporiap

    This is incorrect. Nothing in my argument prevents any algorithm from working. Another way of thinking about the argument is that it shows that consciousness is not algorithmic. In this particular case, if we want to sample every 10 ms and removing and replacing the instruction takes 1 ms (a very long time in the computer world), all we need to do is speed up the clock by 10%.

    The critical question is whether it is the presence or the operation of the program that would cause consciousness. It is difficult to believe that the non-operational presence of the algorithm could do anything. It is also hard to think of a scenario in which the execution of one step (the last step of the minimal program) could effect consciousness.

    Let's reflect on this last possibility. All executing a computer step does is effect a state transition from the prior state S1 to a successor state S2. So if the program is to effect consciousness, all we need to do is start the machine in S1 and effect the transition to S2. Now it is either the S1-S2 transition itself that effects consciousness, or it is being in S2 that effects consciousness. If it is being in S2 that effects consciousness, we do not need a program at all, we only need to start the machine in S2 and leave it there. It is hard to see how such a static state could model, let alone effect, consciousness.

    So, we are left with the possibility that a single step, the one which effects the S1-S2 transition, magically causes consciousness. This is the very opposite of the original idea that a program of sufficient complexity might produce consciousness. It shows that complexity is not a fruitful hypothesis.

    What do you mean they solve mathematical problems only? There are reinforcement learning algorithms out now which can learn your buying and internet surfing habits and suggest adverts based on those preferences. There are learning algorithms which -from scratch, without hard coded instruction- can defeat players at high-level strategy games, without using mathematical algorithms.aporiap

    They do use mathematical algorithms, even if they are unclear to the end user. At the most fundamental level, every modern computer is a finite state machine, representable by a Turing machine. Every instruction can be represented by a matrix which specifies, for every state, that if the machine is in state Sx it will transition to state Sy. Specific programs may also be more or less mathematical at higher levels of abstraction. The internet advertising programs you mention represent interests by numerical bins and see which products best fit your numerical distribution of interests. Machine learning programs often use mathematical models of neural nets, generate-and-test algorithms, and a host of other mathematical methods, depending on the problem faced.
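
    To make this concrete, here is a toy sketch (the states and the single "instruction" below are invented for illustration, not drawn from any real machine) of an instruction as nothing more than a state-to-state transition table:

    ```python
    # Toy illustration: an "instruction" on a finite state machine is just a
    # mapping that says, for each current state Sx, which state Sy comes next.
    # The states and the instruction are made up for the example.

    states = ["S0", "S1", "S2", "S3"]

    # One instruction, written as a transition table over all the states.
    advance = {"S0": "S1", "S1": "S2", "S2": "S3", "S3": "S0"}

    def step(state, instruction):
        """Execute one instruction: a pure state-to-state transition."""
        return instruction[state]

    state = "S0"
    for _ in range(3):
        state = step(state, advance)

    print(state)  # S3 -- running the program is nothing but chained transitions
    ```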

    Also I don't get the point about why operating on reality representations somehow makes data-processing unable to be itself conscious. The kind of data-processing going on in the brain is identical to the consciousness in my account. It's either that or the thing doing the data processing [i.e. the brain] which is [has the property of] consciousness by virtue of the data processing.aporiap

    It does not mean that machines cannot be conscious. It is aimed at the notion that if we model the processes that naturalists believe cause consciousness, we would generate consciousness. An example of this is the so-called Simulation Hypothesis.

    Take an algorithm which plays movies for instance. Any one iteration of the loop outputs one frame of the movie... The movie, here, is made by viewing the frames in a sequential order.aporiap

    I think my logic is exhaustive, but I will consider your example. The analogy fails because of the nature of consciousness, which is the actualization of intelligibility. While much is written about the flow of consciousness, the only reason it flows is because the intelligibility presented to it changes over time. To have consciousness, we need two factors: contents, and awareness of contents. There is no need for the contents to change to have consciousness so defined. The computational and representational theories of mind have a good model of contents, but no model of awareness.

    But, if it can't be physical, and it's not data processing, what is the supposed cause?

    I don't think the multiple realization argument holds here.. it could just be something like a case of convergent evolution, where you have different configurations independently giving rise to the same phenomenon - in this case consciousness. Eg. cathode ray tube TV vs digital TV vs some other TV operate under different mechanisms and yet result in the same output phenomenon - image on a screen.
    aporiap

    Convergent evolution generally occurs because certain forms are best suited to certain ends/niches and because of the presumably limited range of expression of toolkit genes. In other words, because of physical causal factors.

    Still, I don't think your response addresses my question, which was: if the cause of hypothetical machine consciousness is not physical and it is not data processing, what is it?

    What makes different implementations of TV pictures equally TV pictures is not some accident, but that they are products with a common design goal. So, I have two questions:
    1. What do you see as the explanatory invariant in the different physical implementations?
    2. If the production of consciousness is not a function of the algorithm alone, in what sense is this (hypothetical) production of consciousness algorithmic?

    I am not in the field of computer science but from just this site I can see there are at least three different kinds of abstract computational models. Is it true that physical properties of the machine are necessary for all the other models described?aporiap

    Yes, there are different models of computation. Even in the seminal days of computation, there were analogue and digital computers. Physical properties are not part of the computation models in the article you cite. If you read the definitions of the model types, you will see that, after Turing Machines, they are abstract methods, not even (abstract) machine descriptions.

    I have been talking about finite state machines, because modern computers are the inspiration of computational theories of mind, and about Turing machines because all finite state machine computations can be done on a Turing machine, and its simplicity removes the possibility of confusing complex machine design with actual data processing. I think few people would be inspired to think machines could be conscious if they had to watch a Turing machine shuttle its tape back and forth.
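
    For anyone who wants to picture the shuttling, here is a bare-bones sketch of a tiny Turing machine (the states, symbols, and rules are invented for the example) that walks right over a block of 1s and appends one more:

    ```python
    # A minimal Turing machine, just to make the tape-shuttling concrete.
    # Rule table: (state, symbol read) -> (symbol to write, head move, next state)
    rules = {
        ("scan", "1"): ("1", +1, "scan"),   # walk right over the 1s
        ("scan", "_"): ("1",  0, "halt"),   # hit a blank: write a 1 and stop
    }

    tape = list("111___")   # unary "3" followed by blank cells
    head, state = 0, "scan"

    while state != "halt":
        write, move, state = rules[(state, tape[head])]
        tape[head] = write
        head += move
        print("".join(tape), "head:", head, "state:", state)

    # Final tape: 1111__ -- the whole computation is reading, writing, and moving.
    ```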

    Even if consciousness required certain physical features of hardware, why would that matter for the argument since your ultimate goal is not to argue for the necessity of certain physical properties for consciousness but instead for consciousness as being fundamentally intentional and (2) that intentionality is fundamentally distinct from [albeit co-present with] materiality.aporiap

    All the missing instruction argument does is force one to think through why, in a particular case, materiality cannot provide us with intentionality. It moves the focus from the abstract to the concrete.

    I actually think my personal thought is not that different to yours but I don't think of intentionality as so distinct as to not be realized by [or, a fundamental property of] the activity of the physical substrate. My view is essentially that of Searle but I don't think consciousness is only limited to biological systems.aporiap

    We are indeed close. The problem is that there are no abstract "physical substrates." The datum, the given, is that there are human beings who perform physical and intentional acts. Why shoehorn intentionality into physicality with ideas such as emergence or supervenience? Doing so might have social benefits in some circles, but neither idea provides an explanation of, or insight into, the relevant dynamics. All these ideas do is confuse two logically distinct concepts.

    Naturalists would like to point to an example of artificial consciousness, and say "Here, that was not so hard, was it? We don't need any more than a good understanding of (physics, computer science, neural nets, ...) {choose one}." Of course, there is no example to point to, and if there were one, how could we possibly know there was?

    If you want a computer to tell you it's self-aware, I can write you a program in under five minutes that will do so. If you find that too unconvincing, I could write you one that outputs a large number of digits of pi before outputting "I wonder why I'm doing this?" Would such "first-person testimony" count as evidence of consciousness? If not, what would? Not the "Turing test," which Turing recognized was only a game.
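
    For instance, something along these lines would do (a throwaway sketch; the fifteen decimal places of math.pi stand in for the long run of digits):

    ```python
    import math

    # Print a run of digits of pi...
    print(f"pi = {math.pi:.15f}")

    # ...followed by some canned "first-person testimony."
    print("I am self-aware.")
    print("I wonder why I'm doing this?")
    ```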

    I don't understand why a neuron not being conscious but a collection of neurons being conscious automatically leads to the hard problem.aporiap

    I don't think it does. I think Chalmers came to the notion of the "Hard Problem" by historical reflection -- seeing the lack of progress over the last 2500 years. I am arguing on the basis of philosophical analysis that it is not a problem, but a chimera.

    Searle provides a clear intuitive solution here in which it's an emergent property of a physical system in the same way viscosity or surface tension are emergent from lower-level interactions- it's the interactions [electrostatic attraction/repulsion] which, summatively result in an emergent phenomenon [surface tension] .aporiap

    The problem is that consciousness is not at all emergent in the sense in which viscosity and surface tension are. We know the microscopic properties that cause these macroscopic properties, and can at least outline how to calculate them. They are not at all emergent in the sense of appearing de novo.

    We understand, fairly well, how neurons behave. We know the biomechanics of pulse propagation and understand how vesicles burst to release neurotransmitters. We have neural net models that combine many such neurons to provide differential responses to different sorts of stimulation and understand how positive and negative feedback can be used to improve performance -- modelling "learning" in the sense of useful adaptation.

    None of this gives us any hint as to how any combination of neurons and/or neural nets can make the leap into the realm of intentionality -- for the simple reason that none of our neuroscientific knowledge addresses the "aboutness" (reference) relevant to the intentional order.

    There is an equivocation on "emergence" here. In the case of viscosity and surface tension, what "emerges" is known to be potential at the microlevel. In the case of consciousness, nothing in our fairly complete understanding of neurons and neural nets hints at the "emergence" of consciousness. Instead of the realization of a known potential, we have the coming to be of a property with no discernible relation to known microstructure.

    Well the retinal state is encoded by a different set of cells than the intentional state of 'seeing the cat' - the latter would be encoded by neurons within a higher-level layer of cells [i.e. cells which receive iteratively processed input from lower-level cells] whereas the raw visual information is encoded in the retinal cells and immediate downstream area of early visual cortex. You could have two different 'intentional states' encoded by different layers of the brain or different sets of interacting cells. The brain processes in parallel and sequentiallyaporiap

    Let's think this through. The image of the cat modifies my retinal rods and cones, which modification is detected by the nervous system in whatever detail you wish to consider. So, every subsequent neural event inseparably carries information about both my modified retinal state and about the image of the cat because they are one and the same physical state. I cannot have an image of the cat without a modification of my retinal state, and the light from the cat can't modify my retinal state without producing an image of the cat.

    So, we have one physical state in my eye, which is physically inseparable from itself, but which can give rise to two intentional states <the image of the cat> and <the modification of my retinal state>.

    Of course, once the intellect has distinguished the diverse understandings into distinct intentional states and we start to articulate them, the articulations will have different physical representations. But, my point is that no purely physical operation can separate one physical state into two intentional states. Any physical operation will be performed equally on the state as the foundation for both intentional states, and so cannot separate them.

    Okay but you seem to imply in some statements that the intentional is not determined by or realized by activity of the brain.aporiap

    That is because I hold, as a matter of experience and analysis, that the physical does not fully determine the intentional. I first saw this point pressed by Augustine in connection with sense data not being able to force itself on the intellect. Once I saw the claim, I spent considerable time reflecting on it.

    Consider cases of automatic processing, which show that we can respond appropriately to complex sensory stimuli without the need for intellectual intervention. Ibn Sina gives citara players as his example, Lotze offers writing and piano playing as his, Penrose points to people who carry on conversations without paying attention, J. J. C. Smart proffers bicycle riding. So, clearly sensory data and its processing does not force itself on awareness.

    The evidence for "the unconscious mind" similarly shows that data processing and response can occur without awareness. Most of us have been exposed to Freudian case studies at some point. Graham Reed has published studies of time-gap experiences in which we become aware of the passage of time after being lost in thought. Jacques Hadamard provides us with an example of unconscious processing in Poincare's solution to the problem of Fuchsian functions.

    In Augustine's model, rather than the physical forcing itself on the intellect, we do not become aware until the will turns the intellect's attention to the intelligible contents. This seems to me to best fit experience.

    I would say intentional state can be understood as some phenomenon that is caused by / emerges from a certain kind of activity pattern of the brain.aporiap

    What kind?

    Of course the measurables are real and so are their relations- which are characterized in equations; but the actual entities may just be theoretical.aporiap

    While I know what theoretical constructs are, I am unsure what you mean by the measurables if not the "actual entities." How can you measure what does not exist?

    I was trying to say that introspection is not the only way to get knowledge of conscious experience. I'm saying it will be possible [one day] to scan someone's brain, decode some of their mental contents and figure out what they are feeling or thinking.aporiap

    I never give much weight to "future science."
    ------------

    The more accurate thing to say is that there are neurons in higher-level brain regions which fire selectively to seemingly abstract stimuli.aporiap

    I have no problem with this in principle. Neural nets can be programmed to do this. That does not make either of them subjectively aware of anything.
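
    As a toy sketch of the point (the weights, threshold, and inputs are arbitrary illustration values), a single programmed unit already "fires selectively" to one class of input patterns, with nothing resembling awareness in sight:

    ```python
    # A lone threshold unit that responds selectively to certain input patterns.
    # Weights, threshold, and inputs are arbitrary illustration values.

    def selective_unit(inputs, weights, threshold=0.5):
        """Weighted sum followed by a threshold: returns 1 ("fires") or 0."""
        activation = sum(x * w for x, w in zip(inputs, weights))
        return 1 if activation > threshold else 0

    weights = [0.4, 0.4, 0.4]
    print(selective_unit([1.0, 0.9, 0.8], weights))  # 1: "fires" for this pattern
    print(selective_unit([0.1, 0.0, 0.2], weights))  # 0: silent for this one
    ```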

    That seems to account for the intentional component no?aporiap

    How? You need to show that this actualizes the intelligible content of the conscious act.
  • Mattiesse
    20
    Everyone lives in their own world, their own reality and/or sense of what’s around them. Influences from parents, peers, siblings, family and friends, events, environment, etc. can all affect the person’s brain and body, physically and mentally, leaving a person who simply acts out what their brain has held onto. It seems more like a 50/50 chance of acquiring teachings and experiences than automatically 100 percent straight away. Almost like, the closer the match or interest, the easier it is to learn.
  • Dfpolis
    1.3k
    We learn by abstraction from experience. — Dfpolis

    Hmmmm, yes. I see. I see you’re talking about learning, I’m talking about understanding.
    Mww

    As we have no reason to think that babies understand counting, the understanding of counting found in older children comes to be. The coming to be of understanding is learning, and we can investigate it by seeing how children come to understand counting.

    how do you suppose culturally differentiated systems find a commonality in their respective analysis? What is the same for a child here and now arriving at “5”, and a medieval Roman child arriving at “V”?Mww

    Each child, in any time and culture, has to count four instances before properly applying the fifth count. The cultural invariant is the concept <five>, not what is counted, or the words or signs used to express that concept.

    One reason to believe would be, the world of experience satisfies some prerogatives that belong to a priori truths, re: one doesn’t need the experience of a severe car crash to know a severe car crash can kill him.Mww

    It is quite true that we do not need to have had an experience to understand it, but we do need analogous experiences. If we had no experience of cars, it would be difficult to understand the concept of a car crash.

    My point is this: to understand that I am dealing with an instance of supposed a priori knowledge, say <All As are Bs>, I have to recognize that I have an instance of A before me. That means that my experience has to be able to evoke the concept <A>. But, if the concept <A> can be evoked by experiencing concrete As, I have no reason to think that <A> must be given a priori. Since this argument works for any instance of knowledge that can be applied to experience, there is no reason to think that anything we can say of experience is given a priori.

    Of course the concept <A> is not the judgement <All As are Bs>. Still, the usual justification for the claim that such a judgement is known a priori is that if one understands the concept <A>, then one sees that <B> is somehow "contained" in it. So, if our understanding of <A> is a posteriori, then so is our understanding of <All As are Bs>.

    But general a priori truths have nothing whatsoever to do with experience (hence the standing definition), but are sustained by the principles of universality and necessity, for which experience can never suffice, re: two parallel lines can never enclose a space. I think it’s more significant, not that we do know some truths a priori, but that we can.Mww

    I think that there is a great deal more information packed into our experience of being than you seem to. To my mind, any experience of being is adequate grounds for a transcendental understanding of being -- one that necessarily applies to whatever is. Such an understanding in turn adequately grounds the principles of Identity, Contradiction, and Excluded Middle. (Any "thing" that could violate these principles cannot conform to our understanding of what a being is.)

    Since, once we have such transcendental principles we know they apply to all reality, they may be thought of as a priori, but as they are grounded in our experiential understanding of being, they are, in the first instance, and ultimately, a posteriori.
  • Dfpolis
    1.3k
    I wanted to add that the reason sensible representations cannot make themselves known is that they are not operative in the intentional theater. Sensible representations are only potentially active at the level of thought and logic. To be actually operative, their latent intelligibility must become actualized, and that requires something already operative in the intentional/logical order.

    Thus, intentional being is ontologically prior to material being.
  • tim wood
    9.3k
    The coming to be of understanding is learningDfpolis
    Which is developmentally conditioned and bounded. Within these limits, agreed. At and beyond, another topic. I mean this as a parenthetic comment.

    and that requires something already operative in the intentional/logical order.
    Thus, intentional being is ontologically prior to material being.
    Dfpolis
    In opposition to Sartre's "existence precedes essence"? But I'm not asking for argument here, either.

    It is "ontologically" I request you briefly define. In particular and more simply, that ontological priority is not to be confused with temporal priority, yes?

    And a challenge - to anyone. I have admired this thread from afar, as to its substance and the exhaustiveness of the posts. It is a pilgrimage in itself, or would be, to read them all. The challenge is to recapitulate for the rest of us in perhaps five sentences or less, the main point(s) of this thread. (Or to treat this challenge contemptuously, which in fact it may deserve!)
  • tim wood
    9.3k
    Since, once we have such transcendental principles we know they apply to all reality, they may be thought of as a priori, but as they are grounded in our experiential understanding of being, they are, in the first instance, and ultimately, a posteriori.Dfpolis

    You're in an area here where, to my way of thinking, precision in thought and expression is all. I read it this way: "Sure, we discover things, and discovery, by the very nature of the word, is an empirical process. One might even think of it as unveiling or revealing. Thus everything that is discovered is first and finally, empirical, i.e., revealed." Thus how I read it.

    As to characterizing the process, no disagreement here and now (maybe somewhere else and later). As to characterizing the thing discovered, no. For your argument to stand, you have to define empiricism idiosyncratically and in a way that itself "proves" your case, in short, begs the question. And at the same time destroys its common meaning.

    I read it - you - as having the a priori being just a case of the a posteriori, a subset, a species. I argue that they're different animals. It is as if you wished to characterize people as apes. In evolutionary terms, yes, but not now. Not without violence to all the terms in use.
  • Dfpolis
    1.3k
    and that requires something already operative in the intentional/logical order.
    Thus, intentional being is ontologically prior to material being. — Dfpolis

    In opposition to Sartre's "existence precedes essence"? But I'm not asking for argument here, either.
    tim wood

    Aquinas took the position on existence long before Sartre was a twinkle.

    I don't see Material and Intentional as on the same order of abstraction as Essence and Existence. So, no such opposition was intended.

    It is "ontologically" I request you briefly define. In particular and more simply, that ontological priority is not to be confused with temporal priority, yes?tim wood

    Exactly. A is ontologically prior to B if the actuality/operationality of B requires that of A, but not the reverse.

    The challenge is to recapitulate for the rest of us in perhaps five sentences or less, the main point(s) of this thread. (Or to treat this challenge contemptuously, which in fact it may deserve!)tim wood

    Thank you for your appreciation. It would be unfair of me to summarize the careful reflections of others, especially in those cases where I see things differently.
  • Dfpolis
    1.3k
    Thus everything that is discovered is first and finally, empirical, i.e., revealed." Thus how I read it.tim wood

    Yes, but I was also trying to say how insights based on the nature of being may appear to be a priori.

    For your argument to stand, you have to define empiricism idiosyncratically and in a way that itself "proves" your case, in short, begs the question. And at the same time destroys its common meaning.tim wood

    I was not trying to define "empiricism" at all. I am happy to admit it has many flavors. I was talking about "experience" -- about the world as it interacts with us and so reveals itself to us. So, I am unsure where you see question begging. Could you please explain?

    I read it - you - as having the a priori being just a case of the a posteriori, a subset, a species. I argue that they're different animals. It is as if you wished to characterize people as apes. In evolutionary terms, yes, but not now. Not without violence to all the terms in use.tim wood

    I really do not follow this. Could you expand?

    Let me say what I mean. Whenever we experience anything, we experience being -- something that can act to effect the experience we are having. We usually don't strip out all of the specifics to arrive at existence as the unspecified power to act; nonetheless, it is there, at the corner of awareness, ready to be examined and reflected upon if we choose to do so. So, there is a concept of <being> hovering in the background, and when we reflect on principles such as Identity or Excluded Middle, it is there to help us judge them.
  • Mww
    4.9k
    The cultural invariant is the concept <five>, not what is countedDfpolis

    Agreed. Which merely begs the question......from where did such a cultural invariant arise? It must be a condition of all similarly constituted rationalities, n’est pas? All that is counted, and the labels assigned to each unit of substance in the series of counting are immediately dismissed. What is left, both necessarily and sufficiently enabling a thoroughly mental exercise? It is nothing but the pure, a priori concepts, thought by the understanding alone, rising from the constitution of the mind**, the categories of quantity (plurality), quality (reality), relation (causality) and modality (existence). Without these, in conjunction with phenomena in general, no understanding is possible at all, which means.......no counting.
    ** Hey....this is a philosophy forum. Cognitive neuroscience is down the hall on the right, just past the fake rubber tree.

    If we had no experience of cars, it would be difficult to understand the concept of a car crash.Dfpolis

    While such is agreeable superficially, it is also irrelevant, within the context of the topic. Because not all cars are involved in crashes, the concept of car alone is insufficient to justify the truth of the consequent (a guy will die). The synthetic requirement for an outstanding force is also necessary.

    once we have such transcendental principles we know they apply to all reality, they may be thought of as a prioriDfpolis

    That’s what I’m talking about!!!!!! Odd though, you acknowledge that which we know applies to all reality, yet balk at the realization they are the ground of all empirical exercise. Like counting.

    Nonetheless, I would say, they (transcendental principles) are thought a priori, rather than “they may be thought of as a priori”.
  • Dfpolis
    1.3k
    The cultural invariant is the concept <five>, not what is counted — Dfpolis

    Agreed. Which merely begs the question......from where did such a cultural invariant arise? It must be a condition of all similarly constituted rationalities, n’est pas? All that is counted, and the labels assigned to each unit of substance in the series of counting are immediately dismissed. What is left, both necessarily and sufficiently enabling a thoroughly mental exercise? It is nothing but the pure, a priori concepts, thought by the understanding alone, rising from the constitution of the mind**, the categories of quantity (plurality), quality (reality), relation (causality) and modality (existence).
    Mww

    I agree that there is question begging here, but it occurs when you equate, without supporting argument, "the pure" with "a priori" concepts. We know that people did not always count. Counting is an invented and learned skill, which we see transmitted from parent to child to this day. If enumeration were, as you say, "a condition of all similarly constituted rationalities," then there would never be a time when one culture could count but another could not, and there would be no need to teach counting to children. Yet, there are still anumeric tribes such as the Piraha of the Amazon.

    You may wish to consult Lorraine Boissoneault, "How Humans Invented Numbers -- And How Numbers Reshaped Our World" (https://www.smithsonianmag.com/innovation/how-humans-invented-numbersand-how-numbers-reshaped-our-world-180962485/) or Caleb Everett, Numbers and the Making of Us. In the latter you can read of the Piraha and other anumeric tribes.

    So, the anthropological facts do not conform to your theoretical claims.

    the concept of car alone is insufficient to justify the truth of the consequent (a guy will die). The synthetic requirement for an outstanding force is also necessary.Mww

    Yes, we need to have culturally shared experiences to appreciate danger. We do not give adequate weight to merely possible risks. I don't see how this helps your case.

    That’s what I’m talking about!!!!!! Odd though, you acknowledge that which we know applies to all reality, yet balk at the realization they are the ground of all empirical exercise. Like counting.Mww

    Let's be clear. There are two questions here. (1) Where does our knowledge come from? My claim is that abstraction from experience is adequate to give rise to so-called "a priori" knowledge. (2) What are the conditions for applying such knowledge once acquired? I affirm that there are no restrictions on applying transcendental principles to reality once we know them, and the only restriction for applying mathematical principles is that we are dealing with countable or measurable realities. (Of course the case has to meet the conditions of application of the principle).

    This does not make having a concept of <two> a condition for meeting a husband and wife.

    In many cases how we think of things does not matter.
  • Mww
    4.9k
    There are two questions here.Dfpolis

    I’m OK with both your (1) and (2). Abstraction from experience is adequate for a priori knowledge, but doesn’t address whether any other methodology is possible. I also affirm there are no restrictions on the application of transcendental principles, and dealing with countable or measurable realities by means of mathematical principles. But similarly, such affirmations have nothing to say about the originality of those principles, which is what metaphysics is all about.

    And I’m OK with your “in many cases how we think about things does not matter”. Very seldom if ever, do we examine our reason....the verb, not the noun.....as to its legitimate use. Whether that matters or not depends on what we intend to do about how far astray we find ourselves in thinking about the world of things.
  • Dfpolis
    1.3k
    Abstraction from experience is adequate for a priori knowledge, but doesn’t address whether any other methodology is possibleMww

    Of course other methods are available and useful. I was only talking about how we come to know so-called "a priori" truths.

    Whether that matters or not depends on what we intend to do about how far astray we find ourselves in thinking about the world of things.Mww

    We agree.
  • Joshs
    5.8k
    Searle was not happy with the results of his encounter with Derrida. He likely would have been as unhappy had he been engaged in argument with Heidegger. Reading your posts concerning materialism vs intentionality brought that to mind. Analytic writers like Searle serve as useful touchstones for me. They represent for me one side of a cultural dividing line. Their side upholds the proud tradition of a natural science-based metaphysics owing its origin to the Greeks that, in spite of its Kantian transformations, retains a dualist set of presuppositions concerning matter and form, the material and the intentional, the objective and the subjective, the inner and the outer, sense and content, language and meaning, sign and signified, perception and language, affect and cognition, body and mind.
    But what do hermeneutics, philosophical pragmatism, enactive embodied cognitivism, phenomenology, self-organizing autopoietic systems and constructivism have to contribute to the theorizing of physicists? This question is more difficult to answer than exploring what these modes of thinking have to contribute to pseudo-questions like the mind-body problem. Their varied responses to Dennett's proposed solution to the problem of consciousness are a useful starting point.
  • Dfpolis
    1.3k
    Thank you for your comments.

    My background, while fairly broad, is quite limited with respect to contemporary European philosophy. I do think that each projection of reality has the potential for illuminating aspects missed by other projections. Please feel free to illuminate any corners you think may have been missed.
  • Mww
    4.9k
    You seem the type to instruct the uninitiated, so......

    The problem is that consciousness is not at all emergent in the sense in which viscosity and surface tension are.Dfpolis

    No, but if viscosity and surface tension prove emergence itself is possible, and with the admitted lack of complete understanding of neurophysiology, neuroplasticity, must the possibility of consciousness emerging from mere neural complexity, in principle, be granted?

    .......so-called "a priori" truths.Dfpolis

    Interesting. Why would you qualify some truths as so-called “a priori”? Are you thinking the term is misused? Its value misapplied? The whole schema doubtful?

    What do you mean by transcendental principle, and what is an example of one?

    I think that there is a great deal more information packed into our experience of being than you seem to.Dfpolis

    What is meant by “our experience of being”, and what additional/supplemental information could be packed into my own personal experience of being, that isn’t already there?

    Just trying to get a different perspective.
  • Dfpolis
    1.3k
    The problem is that consciousness is not at all emergent in the sense in which viscosity and surface tension are. — Dfpolis

    No, but if viscosity and surface tension prove emergence itself is possible, and with the admitted lack of complete understanding of neurophysiology, neuroplasticity, must the possibility of consciousness emerging from mere neural complexity, in principle, be granted?
    Mww

    As I tried to explain in my last response, the kind of "emergence" viscosity and surface tension illustrate is not the kind of "emergence" one finds in the neural complexity claim. In the first case, specific mechanisms are used to derive macro-properties from micro-properties. What emerges is not the macro-properties, which are co-occurrent, but our understanding of the relation of the properties. In the second case, no such understanding is proposed and none emerges.

    Indeed, it is hard to parse out any precise meaning for "emergent" in the second case. Instead, it seems to voice an ill-defined attachment to materialism or physicalism. It is not a causal claim. We know what causal claims look like, and there is none of the usual reasoning offered in support of causal claims. It is not a claim of analytic reduction, for we see no argument that "consciousness" names or hides some species of complexity. It is not even a claim that a certain class of phenomena will invariably lead to a second class of phenomena, for subjectivity is not a phenomenon, but the awareness of phenomena.

    Like many unscientific claims, it is also unfalsifiable -- and on many counts. Not only is there so much wiggle room in the idea of "complexity" that any falsifying observation can be thrown out on the basis of being "insufficiently" complex, or the "wrong kind" of complexity (for the right kind is undefined), but we also have to deal with the fact that since consciousness is not intersubjectively observable we have no idea what kind of phenomena to examine to confirm or falsify the claim.

    Since this kind of "emergence" is ill-defined, so is the possibility of its being instantiated.

    Interesting. Why would you qualify some truths as so-called “a priori”? Are you thinking the term is misused? Its value misapplied? The whole schema doubtful?Mww

    As I have explained, some truths are "a priori" in the sense of being transcendentally true, and so not dependent on contingent conditions. That does not mean that we come to know them independently of having experienced being.

    What do you mean by transcendental principle, and what is an example of one?Mww

    I use "transcendental" in the sense of applying to all reality. The principles of being (Identity, Contradiction and Excluded Middle) are transcendental in scope. There is also a transcendental relation between essence and existence, so that whatever is, is intrinsically well-specified.

    What is meant by “our experience of being”, and what additional/supplemental information could be packed into my own personal experience of being, that isn’t already there?Mww

    I appreciate your trying to get a different perspective.

    Whenever we experience anything, we are necessarily experiencing being. That does not mean that we appreciate the metaphysical implications of what we experience. Mostly, we don't abstract away information of more immediate interest, so we rarely consider being as being, instead of, say, as a well-prepared meal. Still, being is always there, waiting to be reflected upon.

    So, there is nothing that is not "already there," but there is a lot that is not seen. As the being we experience has no intrinsic necessity, how is it that it assumes necessity as it becomes part of the past? What is the source of this necessity? Or, looking forward, how can being that is merely potential now become actual? Since it is not yet operational, it cannot actualize its own potential.
  • tim wood
    9.3k
    For your argument to stand, you have to define empiricism idiosyncratically and in a way that itself "proves" your case, in short, begs the question. And at the same time destroys its common meaning.
    — tim wood

    I was not trying to define "empiricism" at all. I am happy to admit it has many flavors. I was talking about "experience" -- about the world as it interacts with us and so reveals itself to us. So, I am unsure where you see question begging. Could you please explain?
    Dfpolis

    This is what I was replying to (copied immediately below), and arguing against; in particular the conclusion, "and ultimately, a posteriori." This in effect re-grounds the knowledge of the thing into the process of the discovery of the thing, while demolishing the status of the knowledge as knowledge. Call it, perhaps harshly, a failure to distinguish between what is discovered as discovery, and what is known as a result of that discovery. To know that gold is a yellow metal, that water is H2O, that a bachelor is an unmarried man, arguably first requires discovery/definition/naming. Once done, and the thing known/defined/named, never again need it be discovered: we know it. The fact of these things is in any case unaffected either by the discovery or the knowledge gained thereby. All that has happened is that we've input something into us, the discovery, and refiled it from discovery to knowledge. This step is just the transition from experience, as you put it, or a posteriori, to a priori. Does that imply that all knowledge is a priori? I answer yes, with respect to the criteria that establishes the knowledge as knowledge. Kant's criteria seem stiffer and more restrictive: non-violation of the law of non-contradiction, and universality and necessity. Details (and me) aside, Kant argues for something he calls a priori judgments; you against. So far your argument is a claim. But I do not find that you have argued it in substantive terms.

    Since, once we have such transcendental principles we know they apply to all reality, they may be thought of as a priori, but as they are grounded in our experiential understanding of being, they are, in the first instance, and ultimately, a posteriori.Dfpolis

    -----------------------

    I read it - you - as having the a priori being just a case of the a posteriori, a subset, a species. I argue that they're different animals. It is as if you wished to characterize people as apes. In evolutionary terms, yes, but not now. Not without violence to all the terms in use.
    — tim wood

    I really do not follow this. Could you expand?

    Let me say what I mean. Whenever we experience anything, we experience being -- something that can act to effect the experience we are having. We usually don't strip out all of the specifics to arrive at existence as the unspecified power to act; nonetheless, it is there, at the corner of awareness, ready to be examined and reflected upon if we choose to do so. So, there is a concept of <being> hovering in the background, and when we reflect on principles such as Identity or Excluded Middle, it is there to help us judge them.

    Is referencing <being> a flight to being, or an explication of experience/phenomena? The idea that <being> (I like that notation for the idea enclosed therein) hovers is very Heidegger. I'll go so far as to step into this puddle without first ascertaining how deep it is, viz., that Heidegger concluded that indeed <being> hovers, and that's as close as we'll get to it! That is, Heidegger, here, is not a life-ring to keep you afloat. You have <being> as "something that can act." (I note too you have <being> that we experience, and "a concept of <being>... there to help us.") How does it act? Would it both simplify and demystify to rebrand this <being> as just a capacity of the human mind?
  • Dfpolis
    1.3k
    I've read your post several times and am at a loss. As I intend "ultimately a posteriori," it means that the principles in question are learned from experience, and are not known innately. I did not see you objecting to this, so I still don't know what you think we are disagreeing about.

    This in effect re-grounds the knowledge of the thing into the process of the discovery of the thing, while demolishing the status of the knowledge as knowledge.tim wood

    I do not see how grounding knowledge in experience can involve anything retrograde -- anything that can be called "re-grounding." Nor do I see how anything that has origins can be "demolished" by giving an account of its origins. I can only conclude that some turn of phrase has connotations for you that it does not have for me.

    Once done, and the thing known/defined/named, never again need it be discovered: we know it.tim wood

    Have I implied otherwise? How?

    Does that imply that all knowledge is a priori? I answer yes, with respect to the criteria that establishes the knowledge as knowledge.tim wood

    No, it does not, because some knowledge is purely contingent, and has no a priori component. That Charles Dickens was born February 7, 1812 will never be an a priori fact however well known it is. If we know something in light of general principles, not contingent on the case at hand -- as we know that if I have two apples and am given two more apples, I will have four apples -- then we may be said to know it "a priori" even though we learn arithmetic from experience. But, if the very reason that we know something, as Dickens' birth date, is contingent, then there is nothing a priori about it, however long we may have known it.

    Still, if this is not how you wish to use the terms "a priori" and "a posteriori," that is your choice and not a matter that can be settled by argument.

    So far your argument is a claim. But I do not find that you have argued it in substantive terms.tim wood

    It seems to me that no such argument is called for. We see how children learn, say, arithmetic. We give them different things to count until they have the flash of insight (which is an abstraction) by which they see that the counting process does not depend on what is counted. We see the same kinds of abstractions occurring in other areas -- for example, if something is happening, something is acting to make it happen -- and so, in a higher-order abstraction, we see that the principles of abstract sciences are grasped by abstraction. In light of this, it seems to me that the burden is on the camp of innate knowledge to show that such abstraction is impossible.

    Is referencing <being> a flight to being, or an explication of experience/phenomena?tim wood

    I am not sure what "a flight to being" would mean. We are immersed in being, we can't fly from it or to it.

    I would see it more as a penetration of experience -- drilling down to its transcendental core. One might think of seeing the forest instead of the trees.

    You have <being> as "something that can act." (I note too you have <being> that we experience, and "a concept of <being>... there to help us.") How does it act? Would it both simplify and demystify to rebrand this <being> as just a capacity of the human mind?tim wood

    I got the explication of being as anything that can act or be acted upon from Plato in the Sophist. I believe it is F. M. Cornford who remarked that Plato sees this as a sign/mark of being rather than a definition of being. In any event, we can drop the passive part because if we are acting on a putative being, and it does not re-act in some way, then no matter how much we are exerting ourselves, we are not acting on it at all.

    I would see the power to act not as a definition, or as a mark, of being, but as convertible with being. It prevents us from mistaking being for passive persistence, but it does not define being because there is no more definition of "to act" than there is of "to be." It does help us clarify the distinction of essence as a specification of possible acts and existence as the indeterminate capacity to act.

    If we reduce being to a capacity of the human mind, then we have made the ultimate anthropomorphic error and are on the slippery slope to solipsism. Also, we have fundamentally misunderstood mind, which is at one pole of the subject-object relation of knowing.

    Nor is being well understood as the other (objective) pole. Being is only known to the extent that it has revealed itself to us by deigning to interact with us -- to include us in its game, as it were.
  • SteveKlinko
    395
    It is quite common to believe that intentional realities, as found in conscious thought, are fundamentally material -- able to be explained in terms of neurophysiological data processing. This belief has presented metaphysical naturalists with what David Chalmers has called "the Hard Problem." It seems to me that the Hard Problem is a chimera induced by a provably irrational belief.Dfpolis
    So I'm expecting that you are going to show how the Hard Problem goes away. I'll read on.

    By way of background, I take consciousness to be awareness of present, typically neurophysiologically encoded, intelligibility. I see qualia as of minor interest, being merely the contingent forms of awareness.Dfpolis
    I think you are missing an important aspect of Consciousness by dismissing the experience of Qualia as you do. What is that Redness that you experience when you look at a Red object or when you Dream about a Red Object?

    I am not a dualist. I hold that human beings are fully natural unities, but that we can, via abstraction, separate various notes of intelligibility found in unified substances. Such separation is mental, not based on ontological separation. As a result, we can maintain a two-subsystem theory of mind without resort to ontological dualism.Dfpolis
    Sounds like you are saying that there are two separate subsystems of the Material Mind (the Neurons). One is the Computational Machine sub system that is not Conscious and the other sub system is the Conscious aspect where Intentional Reality exists. Another way of saying this is that it is all in the Neurons. But this is still perpetuating the Belief that you criticized above. But then you say:

    Here are the reasons I see intentional reality as irreducible to material reality.Dfpolis
    This sounds like you are saying that you are going to show that Intentional Reality cannot be found in the Neurons. So then where is it? What is it? Sounds like Ontological Dualism to me.

    1. Neurophysiological data processing cannot be the explanatory invariant of our awareness of contents. If A => B, then every case of A entails a case of B. So, if there is any case of neurophysiological data processing which does not result in awareness of the processed data (consciousness) then neurophysiological data processing alone cannot explain awareness. Clearly, we are not aware of all the data we process.Dfpolis
    You are just assuming that Neural Activity must imply Conscious Activity in all cases. This does not have to be true even if the Conscious Activity really is all in the Neurons. We don't know enough about Conscious Activity to make sweeping conclusions like this about anything.

    2. All knowledge is a subject-object relation. There is always a knowing subject and a known object. At the beginning of natural science, we abstract the object from the subject -- we choose to attend to physical objects to the exclusion of the mental acts by which the subject knows those objects. In natural science we care what Ptolemy, Brahe, Galileo, and Hubble saw, not the act by which the intelligibility of what they saw became actually known. Thus, natural science is, by design, bereft of data and concepts relating to the knowing subject and her acts of awareness. Lacking these data and concepts, it has no way of connecting what it does know of the physical world, including neurophysiology, to the act of awareness. Thus it is logically impossible for natural science, as limited by its Fundamental Abstraction, to explain the act of awareness. Forgetting this is a prime example of Whitehead's Fallacy of Misplaced Concreteness (thinking what exists only in abstraction is the concrete reality in its fullness).Dfpolis
    Yes but this seems to imply that the Conscious Activity of Intention can not be found in the Neurons by Science yet. This implies that Conscious Activity must be some other kind of thing that is not in the Neurons. Sounds like Ontological Dualism to me.

    3. The material and intentional aspects of reality are logically orthogonal. That is to say, that, though they co-occur and interact, they do not share essential, defining notes. Matter is essentially extended and changeable. It is what it is because of intrinsic characteristics. As extended, matter has parts outside of parts, and so is measurable. As changeable, the same matter can take on different forms. As defined by intrinsic characteristics, we need not look beyond a sample to understand its nature.Dfpolis
    I like the Orthogonal Mathematics metaphor. In mathematics when Vectors are Orthogonal you cannot project one onto the other. You cannot project the Intentional Vector onto the Material Vector.

    Intentions do not have these characteristics. They are unextended, having no parts outside of parts. Instead they are indivisible unities. Further, there is no objective means of measuring them. They are not changeable. If you change your intent, you no longer have the same intention, but a different intention. As Franz Brentano noted, an essential characteristic of intentionality is its aboutness, which is to say that they involve some target that they are about. We do not just know, will, or hope; we know, will, and hope something. Thus, to fully understand/specify an intention we have to go beyond its intrinsic nature, and say what it is about. (To specify a desire, we have to say what is desired.) This is clearly different from what is needed to specify a sample of matter.Dfpolis
    I'll continue to think about this one.

    4. Intentional realities are information based. What we know, will, desire, etc. is specified by actual, not potential, information. By definition, information is the reduction of (logical) possibility. If a message is transmitted, but not yet fully received, then it is not physical possibility that is reduced in the course of its reception, but logical possibility. As each bit is received, the logical possibility that it could be other than it is, is reduced.

    The explanatory invariant of information is not physical. The same information can be encoded in a panoply of physical forms that have only increased in number with the advance of technology. Thus, information is not physically invariant. So, we have to look beyond physicality to understand information, and so the intentional realities that are essentially dependent on information
    Dfpolis
    This is all well and good if Intention actually is Information. Maybe. I'll continue to think about this too.

    I did not see any solution to the Hard Problem in all this. If Intentional Realities are not reducible to the Material Neurons then what are Intentional Realities? Where are Intentional Realities? How can this be Explained? There is a big Explanatory Gap here. This Explanatory Gap is the Chalmers Hard Problem.
  • Dfpolis
    1.3k
    So I'm expecting that you are going to show how the Hard Problem goes away.SteveKlinko

    No, my claim is that the Hard Problem is a chimera based on the fallacious assumption that intentional reality can be reduced to a material phenomenon. Its solution is to realize that there is no solution, because it is not a real problem.

    I think you are missing an important aspect of Consciousness by dismissing the experience of Qualia as you do. What is that Redness that you experience when you look at a Red object or when you Dream about a Red Object?SteveKlinko

    I am not dismissing qualia. They are quite real. I am saying that they are not essential to being conscious, as they only occur in some cases. For example, we know abstract intelligibility without being aware of qualia: what quale is associated with knowing that the rational numbers are countably infinite or that the real numbers are uncountably infinite? So, I see qualia as real, but not essential to being conscious.

    Sounds like you are saying that there are two separate subsystems of the Material Mind (the Neurons).SteveKlinko

    No, I am saying that there is a material data processing subsystem composed of neurons, glia, and neurotransmitters, but that it cannot account for the intentional aspects of mind, so we also need an additional, immaterial, subsystem to account for intentional operations.

    This sounds like you are saying that you are going to show that Intentional Reality cannot be found in the Neurons.SteveKlinko

    As intentional realities are immaterial, it is a category error to think of them as having a location, as being "in" something. You can think of immaterial realities as being where they act, but that does not confine them to a single location.

    Sound like Ontological Dualism to me.SteveKlinko

    Saying that the material operations of mind are not the intentional operations of mind is no more dualistic than saying that the sphericity of a ball is not its material. There is one substance (ostensible unity) -- the ball or the person -- but we can mentally distinguish different aspects of that substance. As it is foolish to think that we can reduce the sphericity of the ball to its being rubber, so it is foolish to think we can reduce the intentional operations of a person to being material. They are simply different ways of thinking of one and the same thing.

    You are just assuming that Neural Activity must imply Conscious Activity in all cases.SteveKlinko

    No, I am not. I am saying that if A explains B, then every case of A implies a case of B. If we find a counterexample, then, by modus tollens, A does not explain B. This leaves open the possibility that A plus something else might explain B. Still, it takes more than A to explain B. I am suggesting that the "something else" is an intentional subsystem.
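    A minimal formalization of that inference, in my notation rather than the poster's: let E stand for "A explains B", let A(x) and B(x) say that x is a case of A and of B, and let c be the observed counterexample. Then

    \[
    E \rightarrow \forall x\,\big(A(x) \rightarrow B(x)\big), \qquad A(c) \wedge \neg B(c) \;\;\vdash\;\; \neg E
    \]

    The conclusion denies only E, so "A plus something else explains B" is indeed left open.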

    Yes but this seems to imply that the Conscious Activity of Intention can not be found in the Neurons by Science yet.SteveKlinko

    No. It implies that, as long as it begins with the Fundamental Abstraction, natural science is unequipped to deal with intentional operations.

    I did not see any solution to the Hard Problem in all this. If Intentional Realities are not reducible to the Material Neurons then what are Intentional Realities? Where are Intentional Realities? How can this be Explained? There is a big Explanatory Gap here. This Explanatory Gap is the Chalmers Hard Problem.SteveKlinko

    There is only a gap if you assume that intentional operations can be reduced to physical operations. If you do not make this assumption, there is no gap to bridge.

    We know, from experience, that humans can perform both physical and intentional operations. That is a datum, a given, in the same way that it is a given that protons can engage in strong, electromagnetic, weak and gravitational interactions. Knowing these things does not mean that we can or should be seeking ways of reducing one to the other. It does mean that we should seek to understand how these kinds of interactions relate to each other. To do that requires that we employ the best methods to investigate them separately and in combination.
  • SteveKlinko
    395
    I think you are saying that there is only an Explanatory Gap if the Intentional Reality is found to be in the Neurons. But if it is found to be in the Neurons then that means that Science has an Explanation for How and Why it is in the Neurons. There would be no Explanatory Gap here and the Hard Problem would be solved. If Intentional Reality is not found in the Neurons then there would exist a Huge Explanatory Gap as to what it could be. How does this non-Material Intention ultimately interact with the Neurons, as it must, to produce Intentional or Volitional effects? It seems to me the Explanatory Gap is in the opposite situation from what you have stated. Am I correct in saying that Volition is the same as Intention in your analysis?
  • Dfpolis
    1.3k
    I think you are saying that there is only an Explanatory Gap if the Intentional Reality is found to be in the NeuronsSteveKlinko

    I am not sure what, operationally, it would mean to find intentional reality "in the neurons." If intentions are to be effective, if I am actually able to go to the store because I intend to go to the store, then clearly my intentions need to modify the behavior of neurons and are in them in the sense of being operative in them. Yet, for the hard problem to make sense requires more than this, for it assumes that the operation of our neurophysiology is the cause of intentionality. What kind of observation could possibly confirm this?

    But if it is found to be in the Neurons then that means that Science has an Explanation for How and Why it is in the NeuronsSteveKlinko

    Knowing what is, is not the same as knowing how or why it is. We know that electrons have a charge of -1 in natural units. We have no idea of how and why this is so.

    If Intentional Reality is not found in the Neurons then there would exist a Huge Explanatory Gap as to what it could be.SteveKlinko

    Not at all. We already know what intentionality is. We can define it, describe it, and give uncounted examples of it. What we do not know is what we cannot know, i.e. how something that cannot be its cause is its cause. That is no more a gap than not knowing how to trisect an arbitrary angle with a compass and straightedge is a gap in our knowledge of Euclidean geometry. There is no gap if there is no reality to understand.

    How does this non-Material Intention ultimately interact with the Neurons, as it must, to produce Intentional or Volitional effects?SteveKlinko

    I think the problem here is how you are conceiving the issue. You seem to be thinking of intentional reality as a quasi-material reality that "interacts" with material reality. It is not a different thing, it is a way of thinking about one thing -- about humans and how humans act. It makes no sense to ask how one kind of human activity "interacts" with being human, for it is simply part of being human.

    I have argued elsewhere on this forum and in my paper (https://www.academia.edu/27797943/Mind_or_Randomness_in_Evolution), that the laws of nature are intentional. The laws of nature are not a thing separate from the material states they act to transform. Rather, both are aspects of nature that we are able to distinguish mentally and so discuss in abstraction from each other. That we discuss them independently does not mean that they exist, or can exist, separately.

    Would it make any sense to ask how the laws of nature (which are intentional) "interact" with material states? No, that would be a category error, for the laws of nature are simply how material states act, and it makes no sense to ask how the way a state acts "interacts" with the state acting. In the same way it makes no sense to ask how an effective intention, how my commitment to go to the store, interacts with my going to the store -- it is simply a mentally distinguishable aspect of my going to the store.

    I know this does not sound very satisfactory. So, think of it this way. If I have not decided to go to the store, my neurophysiology obeys certain laws of nature. Once I commit to going, it can no longer be obeying laws that will not get me to the store, so it must be obeying slightly different laws -- laws that are modified by my intentions. So, my committed intentions must modify the laws controlling my neurophysiology. That is how they act to get me to the store.

    Am I correct in saying that Volition is the same as Intention in your analysis?SteveKlinko

    Volition produces what I am calling "committed intentions." There are many other kinds of intentions like knowing, hoping, believing, etc.
  • SteveKlinko
    395
    I think you are saying that there is only an Explanatory Gap if the Intentional Reality is found to be in the Neurons — SteveKlinko
    I am not sure what, operationally, it would mean to find intentional reality "in the neurons." If intentions are to be effective, if I am actually able to go to the store because I intend to go to the store, then clearly my intentions need to modify the behavior of neurons and are in them in the sense of being operative in them. Yet, for the hard problem to make sense requires more than this, for it assumes that the operation of our neurophysiology is the cause of intentionality. What kind of observation could possibly confirm this?
    Dfpolis
    I don't think there is any experimental test for this.

    But if it is found to be in the Neurons then that means that Science has an Explanation for How and Why it is in the Neurons — SteveKlinko
    Knowing what is, is not the same as knowing how or why it is. We know that electrons have a charge of -1 in natural units. We have no idea of how and why this is so.
    Dfpolis
    I'm missing your point here because I said that Science will need to have the Explanation for the How and Why, and not merely the fact that it is.

    If Intentional Reality is not found in the Neurons then there would exist a Huge Explanatory Gap as to what it could be. — SteveKlinko
    Not at all. We already know what intentionality is. We can define it, describe it, and give uncounted examples of it. What we do not know is what we cannot know, i.e. how something that cannot be its cause is its cause. That is no more a gap than not knowing how to trisect an arbitrary angle with a compass and straightedge is a gap in our knowledge of Euclidean geometry. There is no gap if there is no reality to understand.
    Dfpolis
    I disagree that we know anything about what Intentionality is. We know we have it, but what really is it? This is similar to how we Experience the Redness of Red. We certainly know that we have the Experience but we have no idea what it is.

    How does this non-Material Intention ultimately interact with the Neurons, as it must, to produce Intentional or Volitional effects? — SteveKlinko
    I think the problem here is how you are conceiving the issue. You seem to be thinking of intentional reality as a quasi-material reality that "interacts" with material reality. It is not a different thing, it is a way of thinking about one thing -- about humans and how humans act. It makes no sense to ask how one kind of human activity "interacts" with being human, for it is simply part of being human.
    Dfpolis
    If you have an intention to do something then that intention must ultimately be turned into a Volitional command to the Brain that will lead to the firing of Neurons that will activate the muscles of the Physical Body to do something. I believe you called that a Committed Intention.

    I have argued elsewhere on this forum and in my paper (https://www.academia.edu/27797943/Mind_or_Randomness_in_Evolution), that the laws of nature are intentional. The laws of nature are not a thing separate from the material states they act to transform. Rather, both are aspects of nature that we are able to distinguish mentally and so discuss in abstraction from each other. That we discuss them independently does not mean that they exist, or can exist, separately.

    Would it make any sense to ask how the laws of nature (which are intentional) "interact" with material states? No, that would be a category error, for the laws of nature are simply how material states act, and it makes no sense to ask how the way a state acts "interacts" with the state acting. In the same way it makes no sense to ask how an effective intention, how my commitment to go to the store, interacts with my going to the store -- it is simply a mentally distinguishable aspect of my going to the store.
    Dfpolis
    When you say the Laws of Nature are Intentional, it sounds like you are talking about some kind of Intelligent Design. I'm not sure how this is even relevant to the discussion.

    I know this does not sound very satisfactory. So, think of it this way. If I have not decided to go to the store, my neurophysiology obeys certain laws of nature. Once I commit to going, it can no longer be obeying laws that will not get me to the store, so it must be obeying slightly different laws -- laws that are modified by my intentions. So, my committed intentions must modify the laws controlling my neurophysiology. That is how they act to get me to the store.

    Am I correct in saying that Volition is the same as Intention in your analysis? — SteveKlinko
    Volition produces what I am calling "committed intentions." There are many other kinds of intentions like knowing, hoping, believing, etc.
    Dfpolis
    I had been thinking that you actually were using the word Intentions to mean Committed Intentions or, in my way of thinking, Volition. I'm not sure what to do with an abstract concept like Intentions. When you hang your argument for eliminating the Hard Problem on an abstract Intentions concept being Material, you are setting up a straw man.

    This is why I like to frame the Hard Problem in terms of a sensory perception like the Experience of the Redness of Red. The Redness experience cannot be found in the Material Brain. We know that there are Neural Correlates of Consciousness for the Redness experience but we don't know what the Redness experience itself actually could be. It cannot be found in the Neurons in the Brain at this point in the Scientific understanding of the Brain. There is a Huge Explanatory Gap here as to what that Redness experience is. Even if your Intention argument is true, this Redness Experience Explanatory Gap must be solved. This is what the Hard Problem is really all about.
  • Dfpolis
    1.3k
    I'm missing your point here because I said that Science will need to have the Explanation for the How and Why, and not merely the fact that it is.SteveKlinko

    OK. I misunderstood what you were saying. To me there is data, and the data might show that there is intentionality in the neurons, and there is theory, which would explain the data in terms of how and why. But, you agree that there is no experimental test for finding intentionality in neurons, so, there can be no data to explain. That leaves us with the question: What kind of evidentiary support can there be for a theory that supposedly explains something that cannot be observed? If this theory predicts that some set of physical circumstances will produce intentionality in neurons, and we cannot observe intentionality in neurons, doesn't that make the theory unfalsifiable, and so unscientific? In short, I have difficulty in seeing how such a theory can be part of science.

    I disagree that we know anything about what Intentionality is. We know we have it, but what really is it? This is similar to how we Experience the Redness of Red. We certainly know that we have the Experience but we have no idea what it is.SteveKlinko

    If you mean that we cannot reduce these things to a physical basis, that is the very point I am making. But that is not the same as not knowing what a thing is. If we can define intentionality well enough for other people to recognize it when they encounter it, we know what it is.

    I think you need to ask yourself what you mean by knowing "what a thing is?" What things are is fully defined by what they can do. If we know what things can do -- how they scatter light, interact with other objects, and so on -- we know all there is to know about what they are. We pretty much know what various kinds of intentions do. So, in what way do we not know what they are?

    If you have an intention to do something then that intention must ultimately be turned into a Volitional command to the Brain that will lead to the firing of Neurons that will activate the muscles of the Physical Body to do something. I believe you called that a Committed Intention.SteveKlinko

    Agreed. And that means that committed intentions must modify the laws that control how our neurophysiology works. How else could they do what they do?

    When you say the Laws of Nature are Intentional, it sounds like you are talking about some kind of Intelligent Design. I'm not sure how this is even relevant to the discussion.SteveKlinko

    I am not an advocate of Intelligent Design. I think it gravely misunderstands the laws of nature. ID assumes that God is not intelligent enough to create a cosmos that effects His ends without recurrent diddling. That is insulting to God.

    The arguments I give in my paper for the laws of nature being intentional are based solely on our empirical knowledge, and do not assume the existence of an intending God. The relevance here of the laws being intentional is that they are in the same theater of operations as human commitments. Since they are in the same theater of operation, our commitments can affect the general laws, perturbing them to effect our ends. Material operations, on the other hand, are not in the same theater of operation and so cannot affect the laws of nature.

    When you hang your argument for eliminating the Hard Problem on an abstract Intentions concept being Material you are setting up a straw man.SteveKlinko

    This seems confused. First, I am not saying intentions are material. Second, the Hard Problem is about the production of consciousness (of intellect) and not, in the first instance, about volition (will).

    We have no intentions without consciousness, which is awareness of present intelligibility. It makes what was merely intelligible actually known. The brain can process data in amazing ways, but processing data does not raise data from being merely intelligible to being actually known. To make what is intelligible actually known requires a power that is not merely potential, but operational. So, nothing that is merely intelligible, that is only potentially an intention, can produce an intention. Thus, data encoded in the brain cannot make itself actually known -- it cannot produce consciousness.

    What is already operational in the intentional theater is awareness -- what Aristotle called the agent intellect. It is when we turn our awareness to present intelligibility that the neurally encoded contents become known. So, while the brain can produce the contents of awareness, it cannot produce awareness of those contents.

    Even if your Intention argument is true, this Redness Experience Explanatory Gap must be solved. This is what the Hard Problem is really all about.SteveKlinko

    If that were so, then every instance of consciousness, even the most abstract, would involve some quale. It does not. So, qualia are not an essential aspect of consciousness. On the other hand, there is no instance of consciousness without awareness and some intelligible object. So, the essential features of consciousness are awareness/subjectivity and the contents of awareness/objectivity.

    Of course there are qualia, but we do know what they are. All qualia are the contingent forms of sensory awareness. We know, for example, that redness is the form of our awareness of certain spectral distributions of light. There is nothing else to know about redness. If you think there is, what would it be?
  • Terrapin Station
    13.8k
    I think you are saying that there is only an Explanatory Gap if the Intentional Reality is found to be in the Neurons.SteveKlinko

    "Explanatory gap" talk is a red herring as long as we continue to not analyze just what is to count as an explanation and why, with a clear set of demarcation criteria for explanations, and where we make sure that we pay attention to the qualitative differences--in general, for all explanations--between what explanations are and the phenomena that they're explaining.
  • SteveKlinko
    395
    I'm missing your point here because I said that Science will need to have the Explanation for the How and Why, and not merely the fact that it is. — SteveKlinko
    OK. I misunderstood what you were saying. To me there is data, and the data might show that there is intentionality in the neurons, and there is theory, which would explain the data in terms of how and why. But, you agree that there is no experimental test for finding intentionality in neurons, so, there can be no data to explain. That leaves us with the question: What kind of evidentiary support can there be for a theory that supposedly explains something that cannot be observed? If this theory predicts that some set of physical circumstances will produce intentionality in neurons, and we cannot observe intentionality in neurons, doesn't that make the theory unfalsifiable, and so unscientific? In short, I have difficulty in seeing how such a theory can be part of science.
    Dfpolis
    That's how bad our understanding of Consciousness is. We can't even conceive that there could be a Scientific explanation for it. But I think there probably is a Scientific explanation. We just need some smart Mind to figure it out someday in the future.

    I disagree that we know anything about what Intentionality is. We know we have it, but what really is it? This is similar to how we Experience the Redness of Red. We certainly know that we have the Experience but we have no idea what it is. — SteveKlinko
    If you mean that we cannot reduce these things to a physical basis, that is the very point I am making. But that is not the same as not knowing what a thing is. If we can define intentionality well enough for other people to recognize it when they encounter it, we know what it is.

    I think you need to ask yourself what you mean by knowing "what a thing is?" What things are is fully defined by what they can do. If we know what things can do -- how they scatter light, interact with other objects, and so on -- we know all there is to know about what they are. We pretty much know what various kinds of intentions do. So, in what way do we not know what they are?
    Dfpolis
    We know what they are from our subjective Conscious experience of them. But since we don't know what Consciousness is, in the first place, being Conscious of them is not an explanation.

    If you have an intention to do something then that intention must ultimately be turned into a Volitional command to the Brain that will lead to the firing of Neurons that will activate the muscles of the Physical Body to do something. I believe you called that a Committed Intention. — SteveKlinko
    Agreed. And that means that committed intentions must modify the laws that control how our neurophysiology works. How else could they do what they do?
    Dfpolis

    When you say the Laws of Nature are Intentional, it sounds like you are talking about some kind of Intelligent Design. I'm not sure how this is even relevant to the discussion. — SteveKlinko
    I am not an advocate of Intelligent Design. I think it gravely misunderstands the laws of nature. ID assumes that God is not intelligent enough to create a cosmos that effects His ends without recurrent diddling. That is insulting to God.

    The arguments I give in my paper for the laws of nature being intentional are based solely on our empirical knowledge, and do not assume the existence of an intending God. The relevance here of the laws being intentional is that they are in the same theater of operations as human commitments. Since they are in the same theater of operation, our commitments can affect the general laws, perturbing them to effect our ends. Material operations, on the other hand, are not in the same theater of operation and so cannot affect the laws of nature.
    Dfpolis
    I guess you are making a distinction now between Laws of Nature that apply to Intentional Phenomena and Laws of Nature that apply to Material Phenomena. So you should not say the Laws of Nature are Intentional, but only that a subset of the Laws of Nature, those that apply to Intentionality, are Intentional.

    When you hang your argument for eliminating the Hard Problem on an abstract Intentions concept being Material you are setting up a straw man. — SteveKlinko
    This seems confused. First, I am not saying intentions are material. Second, the Hard Problem is about the production of consciousness (of intellect) and not, in the first instance, about volition (will).

    We have no intentions without consciousness, which is awareness of present intelligibility. It makes what was merely intelligible actually known. The brain can process data in amazing ways, but processing data does not raise data from being merely intelligible to being actually known. To make what is intelligible actually known requires a power that is not merely potential, but operational. So, nothing that is merely intelligible, that is only potentially an intention, can produce an intention. Thus, data encoded in the brain cannot make itself actually known -- it cannot produce consciousness.

    What is already operational in the intentional theater is awareness -- what Aristotle called the agent intellect. It is when we turn our awareness to present intelligibility that the neurally encoded contents become known. So, while the brain can produce the contents of awareness, it cannot produce awareness of those contents.
    Dfpolis
    I don't think the Brain is the Consciousness aspect. But rather I think the Brain connects to a Consciousness aspect.

    Even if your Intention argument is true, this Redness Experience Explanatory Gap must be solved. This is what the Hard Problem is really all about. — SteveKlinko
    If that were so, then every instance of consciousness, even the most abstract, would involve some quale. It does not. So, qualia are not an essential aspect of consciousness. On the other hand, there is no instance of consciousness without awareness and some intelligible object. So, the essential features of consciousness are awareness/subjectivity and the contents of awareness/objectivity.

    Of course there are qualia, but we do know what they are. All qualia are the contingent forms of sensory awareness. We know, for example, that redness is the form of our awareness of certain spectral distributions of light. There is nothing else to know about redness. If you think there is, what would it be?
    Dfpolis
    I think every instance of Consciousness actually does involve some sort of Quale. Things that are Subconscious, of course, do not involve any Qualia. Even the sense of Awareness itself has a certain feel to it. The experience of Understanding itself has a feel to it. There are all kinds of Qualia besides sensory Qualia.
  • SteveKlinko
    395
    I think you are saying that there is only an Explanatory Gap if the Intentional Reality is found to be in the Neurons. — SteveKlinko
    "Explanatory gap" talk is a red herring as long as we continue to not analyze just what is to count as an explanation and why, with a clear set of demarcation criteria for explanations, and where we make sure that we pay attention to the qualitative differences--in general, for all explanations--between what explanations are and the phenomena that they're explaining
    Terrapin Station
    Since we cannot even begin to understand how to approach the study of Consciousness there is no way we can make a list of all the possible Explanations. There is no clear set of demarcation criteria for Explanations of Consciousness. Everything and anything is possible at this point. In fact it is a Red Herring to demand such a list of possible Explanations. A First Clue is what we need at this point.
  • Dfpolis
    1.3k
    If this theory predicts that some set of physical circumstances will produce intentionality in neurons, and we cannot observe intentionality in neurons, doesn't that make the theory unfalsifiable, and so unscientific? In short, I have difficulty in seeing how such a theory can be part of science. — Dfpolis

    That's how bad our understanding of Consciousness is. We can't even conceive that there could be a Scientific explanation for it. But I think there probably is a Scientific explanation. We just need some smart Mind to figure it out someday in the future.
    SteveKlinko

    This is like responding to Goedel's proof that arithmetic cannot be proven consistent by means formalizable in arithmetic, by saying we have not formalized enough means. What the argument shows is that there can be no falsifiable theory for consciousness in neurons. Our ability to conceive possibilities does not enter the argument, and so is totally irrelevant.

    Nothing here indicates a poor understanding of consciousness. On the contrary, our understanding is deep enough to rule out whole classes of hypotheses. Being able to do that shows that our understanding is quite good -- just not what people with mechanistic prejudices want.

    The appeal to future science is an argument of desperation.

    We pretty much know what various kinds of intentions do. So, in what way do we not know what they are? — Dfpolis

    We know what they are from our subjective Conscious experience of them. But since we don't know what Consciousness is, in the first place, being Conscious of them is not an explanation.
    SteveKlinko

    We do know what consciousness is: It is the capacity to actualize present intelligibility. All we do not "know" is the pipe dream of materialists, viz., how to reduce consciousness to a material basis.

    You continue to confuse the hope of materialists with some unknown reality. Hopes are inadequate to establish existence.

    I guess you are making a distinction now between Laws of Nature that apply to Intentional Phenomena and Laws of Nature that apply to Material Phenomena. So you should not say the Laws of Nature are Intentional, but only that a subset of the Laws of Nature, those that apply to Intentionality, are Intentional.SteveKlinko

    I am making a distinction between the base, unperturbed laws of nature (Newton's universal laws) and those laws as perturbed by human committed intentions. Perturbations in physics do not change the general character of the base laws, they only cause them to act in a slightly different way in the case under consideration.

    That human intentions really can perturb the laws of nature has been confirmed by hundreds of experiments and is known to be the case beyond a statistical doubt. These experiments and their meta-analyses consistently show a small effect (~10^-5 to ~10^-4) with a high statistical certainty (z = 4.1, 18.2, 16.1, 7 in various studies).
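    To see how a per-trial effect that small can still yield a large z, here is a rough back-of-the-envelope sketch; the effect size and trial count below are illustrative assumptions of mine, not figures taken from the studies referred to above:

    import math

    def z_score(effect, n_trials, p0=0.5):
        # z statistic for a hit rate of p0 + effect over n_trials binary
        # trials, tested against a null-hypothesis hit rate of p0
        excess_hits = effect * n_trials
        std_dev = math.sqrt(n_trials * p0 * (1 - p0))
        return excess_hits / std_dev

    # An effect of ~10^-4 needs on the order of 4 x 10^8 binary trials
    # to reach z of about 4; smaller effects need correspondingly more.
    print(z_score(1e-4, 4 * 10**8))  # ~4.0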

    Apparently you have not read the arguments for the intentionality of the laws of nature in my paper. If you do, you will see that they address the base laws studied by physics, unperturbed by human intentions.

    I don't think the Brain is the Consciousness aspect. But rather I think the Brain connects to a Consciousness aspect.SteveKlinko

    Of course it does. The brain processes the information we are aware of. To have an act of consciousness we need two things: an object and a subject, contents (processed by the brain) and awareness of those contents (provided by the agent intellect).

    I think every instance of Consciousness actually does involve some sort of Quale.SteveKlinko

    What is the quale of being conscious of the fact that the irrational numbers are uncountable? Or that arithmetic cannot be proven to be consistent by means formalizable in arithmetic?

    We may think of the sound of words in thinking these things, but those sounds are not the quale of what is known, because we can think the same propositions in French, German or Greek. So, there is no fixed relation between the content and the thought sound, as there is a fixed relation between the spectral distribution and the quale of red.

    There are all kinds of Qualia besides sensory Qualia.SteveKlinko

    You may broaden the definition of "quale" to make it apply beyond sensory experience, but that is not how most people use the word. When you broaden the meaning in this way, "quale" becomes indistinguishable from "experience."