• Possibility
    2.8k
    To point in the direction of the mop and say 'it is not the case that there is a Muppet in the mop cupboard' sounds like an example of the problem of counterfactual conditionals. People who are anxious about the metaphysical aspects of realism will argue that there are no negative facts and thus correspondence breaks down. This proposition about the mop cupboard doesn't seem to have any corresponding relation to objects and relations in the world. Or something like that.Tom Storm

    It is when we exclude negative facts from realism that we limit the perception of truth in which we operate. That’s fine, as long as we recognise this when we refer to truth. Counterfactual conditionals are only a problem if we fail to recognise this limited perception of truth.

    The proposition ‘it is not the case that there is a Muppet in the mop cupboard’ is made from a six-year-old's perception of truth, the limitations of which have been isolated from the proposition. A six-year-old would make a proposition in order to test conceptual knowledge, not to propose truth. A more accurate counterfactual conditional here (pointing in the direction of the mop) would be: ‘if it were not the case that there is a Muppet in the mop cupboard, then that would be a Muppet’. This clarifies the limited perception of truth in which the proposition operates, with correspondence intact.
  • Possibility
    2.8k
    The Problem Of The Criterion has, at its core, the belief that,

    1. To define we must have particular instances (to abstract the essence of that which is being defined)

    2. To identify particular instances we must have a definition

    The Problem Of The Criterion assumes that definitions and particular instances are caught in a vicious circle of the kind we've all encountered - the experience paradox - in which to get a job, we first need experience, but to get experience, we first need a job. Since neither can be acquired before the other, it's impossible to get either.

    For The Problem Of The Criterion to mean anything, the relationship between definitions and particular instances must be such that each is a precondition for the other, thus closing the circle and trapping us in it.

    However, upon analysis, this needn't be the case. We can define arbitrarily (methodism) as much as non-arbitrarily (particularism) - there's no hard and fast rule that these two make sense only in relation to each other, as The Problem Of The Criterion assumes. I can be a methodist in certain situations or a particularist in others; there's absolutely nothing wrong in either case.
    TheMadFool

    The way I see it, the Problem of the Criterion is not just about defining concepts or identifying instances, but about our accuracy in the relation between definition and identification. The problem is that knowledge is not an acquisition, but a process of refining this accuracy, which relies on identifying insufficient structures in both aspects of the process.

    To use your analogy, the process of refining the relation between getting a job and getting experience relies on each aspect addressing insufficiencies in the other. To solve the problem of circularity, it is necessary to acknowledge this overall insufficiency, and to simply start: either by seeking experience without getting a job (i.e. volunteer work, internships, etc.) or by seeking a job that requires no experience (an entry-level position or unskilled labour).

    In terms of identification and definition, it is necessary to recognise the overall insufficiency in our knowledge, and either start with an arbitrary definition to which we can relate instances in one way or another, or identify instances that will inform a definition - knowing that the initial step is far from accurate, but begins a relational process towards accuracy.
  • simeonz
    310
    This makes sense to me. Much of what you have written is difficult for me to follow, but I get the sense that we’re roughly on the same page here...?Possibility
    This reminds me of a Blackadder response - "Yes.. And no."

    I’m pointing out a distinction between the linguistic definition of a concept - which is an essentialist and reductionist methodology of naming consolidated features - and an identification of that concept in how one interacts with the world - which is about recognising patterns in qualitative relational structures.Possibility
    I think that according to your above statement, the technical definition of a class does not correlate to immediate sense experience, nor to the conception from direct encounters between the subject and the object, nor to the recognition practices of objects in routine life. If that is the claim, I contend that technically exhaustive definitions are just elaborated contours of the same classes, but with a level of detail that differs, because it is necessary for professionals that operate with indirect observations of the object. Say, as a software engineer, I think of computers in a certain way, such that I could recognize features of their architecture in some unlabeled schematic. A schematic is not immediate sense experience, but my concept does not apply to just appearances, but to logical organization, so the context in which the full extent of my definition will become meaningful is not the perceptual one. For crude recognition of devices by appearances in my daily routine, I match them to the idea using a rough distilled approximation from my concept, drawing on the superficial elements in it, and removing the abstract aspects, which remain underutilized.

    If you are referring just to the process of identification, you are right that if I see an empty computer case, I will at first assume that it is the complete assembly and classify it as a computer. There is no ambiguity as to what a computer is in my mind, even in this scenario, but the evaluation of the particular object is based on insufficient information, and it is made with assumed risk. The unsuccessful application of weighted guesses to fill the missing features turns into an error in judgement. So, this is fuzziness of the concept matching process, stemming from the lack of awareness, even though the definition is inappropriate under consideration of the object's complete description.
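    A minimal Python sketch of this 'weighted guess' process (an illustration only, not from the post - the feature names, prior and threshold are all invented): missing features are filled with prior expectations, so an empty case gets classified as a computer until the missing information arrives.

        # Toy classifier under incomplete information: the unobserved feature is
        # filled with a weighted guess (its prior expectation). Values invented.
        PRIOR_HAS_BOARD = 0.9  # assumption: most cases we encounter are full assemblies

        def classify(observed):
            p_board = observed.get("has_motherboard", PRIOR_HAS_BOARD)  # the weighted guess
            looks_like_case = observed.get("looks_like_case", 0.0)
            score = 0.5 * looks_like_case + 0.5 * p_board  # crude appearance + internals score
            return "computer" if score > 0.6 else "not a computer"

        print(classify({"looks_like_case": 1.0}))                          # 'computer' (internals guessed)
        print(classify({"looks_like_case": 1.0, "has_motherboard": 0.0}))  # 'not a computer' (full information)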

    Another situation is that if I am given a primitive device with some very basic cheap electronics in it, I might question if it is a computer. Here the fuzziness is not curable with more data about the features of the object, because it results from the borderline evaluation of the object by my classifier. Hence, I should recognize that classes are nuances that gradually transition between each other.

    A different case arises when there is disagreement of definitions. If I see a washing machine, I would speculate that it hosts a computer inside (in the sense of electronics having the capacity for universal computation, if not anything else), but an older person or a child might not be used to the idea of embedded electronics and recognize the object as mechanical. That is, I will see computers in more places, because I have a wider definition. The disparity here is linguistic and conceptual, because the child or elderly person makes a crude first association based on appearances, and then the resulting identification is not as good a predictor of the quality of the object they perceive. We don't talk the same language and our underlying concepts differ.

    In the latter case, my definition increases the anticipated range of tools supported by electronics, and my view on the subject of computing is a more inclusive classifier. The classification outcome predicts the structure and properties of the object, such as less friction and less noise. We ultimately classify the elements of the environment with the same goal in mind - discernment between distinct categories of objects and anticipation of their properties - but the boundaries depend on how much experience we have and how crudely we intend to group the objects.

    So, to summarize. I agree that sometimes the concept is indecisive due to edge cases, but sometimes the fuzziness is in its application due to incomplete information. This does not change the fact that the academic definition is usually the most clearly ascribed. There is also the issue of linguistic association with concepts. I think that people can develop notions and concepts independently of language and communication, just by observing the correlations between features in their environment, but there are variables there that can sway the process in multiple directions and affect the predictive value of the concept map.
  • creativesoul
    12k
    This makes sense to me. Much of what you have written is difficult for me to follow, but I get the sense that we’re roughly on the same page here...?
    — Possibility
    This reminds me of a Blackadder response - "Yes.. And no."
    simeonz

    :smile:

    Get used to it with Possibility.
  • Kaiser Basileus
    52
    1. Which propositions are true/knowledge? [Instances of truth/knowledge]

    2. How can we tell which propositions are true/knowledge? [Definition of truth/knowledge]

    Knowledge is justified belief. What evidence counts as sufficient justification depends first upon the desired intent. Truth is an individual perspective on reality (consensus experience). This understanding is necessary and sufficient for all related epistemological questions and problems.

    universal taxonomy - evidence by certainty
    0 ignorance (certainty that you don't know)
    1 found anecdote (assumed motive)
    2 adversarial anecdote (presumes inaccurate communication motive)
    3 collaborative anecdote (presumes accurate communication motive)
    4 experience of (possible illusion or delusion)
    5 ground truth (consensus Reality)
    6 occupational reality (verified pragmatism)
    7 professional consensus (context specific expertise, "best practice")
    8 science (rigorous replication)
    -=empirical probability / logical necessity=-
    9 math, logic, Spiritual Math (semantic, absolute)
    10 experience qua experience (you are definitely sensing this)
  • Possibility
    2.8k
    So, to summarize. I agree that sometimes the concept is indecisive due to edge cases, but sometimes the fuzziness is in its application due to incomplete information. This does not change the fact that the academic definition is usually the most clearly ascribed. There is also the issue of linguistic association with concepts. I think that people can develop notions and concepts independently of language and communication, just by observing the correlations between features in their environment, but there are variables there that can sway the process in multiple directions and affect the predictive value of the concept map.simeonz

    You seem to be arguing for definition of a concept as more important than identification of its instances, but this only reveals a subjective preference for certainty. There are variables that affect the predictive value of the concept map regardless of whether you start with a definition or identified instances. Language and communication are important to periodically consolidate and share the concept map across differentiated conceptual structures - but also so that variables in the set of instances are acknowledged and integrated into the concept map.
  • Shawn
    13.3k
    Logical simples solve the issue. Just can't find 'em.
  • simeonz
    310
    You seem to be arguing for definition of a concept as more important than identification of its instances, but this only reveals a subjective preference for certainty. There are variables that affect the predictive value of the concept map regardless of whether you start with a definition or identified instances.Possibility
    That is true. I rather cockily answered "yes and no". I do partly agree with you. There are many layers to the phenomenon.

    I want to be clear that I don't think that a dog is defined conceptually by the anatomy of the dog, as if it were inherently necessary to define objects by their structure. I don't even think that a dog can be defined conceptually exhaustively from knowing all the dogs in the world. It is, rather, contrasted with all the cats (very roughly speaking). But eventually, there is some convergence, when the sample collection is so large that we can tell enough about the concept (in contrast to other neighboring concepts) that we don't need to continue its refinement. And that is when we arrive at some quasi-stable technical definition.

    There are many nuances here. Not all people have practical use for the technical definition, since their life's occupation does not demand it and they have no personal interest in it. But I was contending that those who do use the fully articulated concept will actually stay mentally committed to its full detail, even when they use it crudely in routine actions. Or at least for the most part. They could make intentional exceptions to accommodate conversations. They just won't involve the full extent of their knowledge at the moment. Further, people can disagree on concepts, because of the extrapolations that could be made from them or the expressive power that certain theoretical conceptions offer relative to others.

    I was originally proposing how the process of categorical conception takes place by direct interactions of children or individuals, without passing of definitions second hand, or from the overall anthropological point of view. I think it is compatible with your proposal. Let's assume that people inhabited a zero-dimensional universe and only experienced different quantities over time. Let's take the numbers 1 and 2. Since only two numbers exist, there is no need to classify them. If we experience 5, we could decide that our mental model is poor, and classify 1 and 2 as class A, and 5 as class B. This now becomes the vocabulary of our mental process, albeit with little difference to our predictive capability. (This case would be more interesting if we had multiple features to correlate.) If we further experience 3, we have two sensible choices that we could make. We could either decide that all numbers are in the same class, making our perspective of all experience non-discerning, or decide that 1, 2, and 3 are in one class, contrasted with the class of 5. The distinction is that if all numbers are in the same class, considering the lack of continuity, we could speculate that 4 exists. Thus, there is a predictive difference.
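    A small Python sketch of the two choices (purely illustrative - the gap threshold below is an invented stand-in for the classing judgement): partitioning the observed quantities by a gap rule and interpolating within each group makes the predictive difference over 4 explicit.

        # Toy partitioning: split the sorted observations wherever the gap between
        # neighbours exceeds max_gap, then anticipate every value a group spans.
        def partition(samples, max_gap):
            samples = sorted(samples)
            groups = [[samples[0]]]
            for x in samples[1:]:
                if x - groups[-1][-1] <= max_gap:
                    groups[-1].append(x)
                else:
                    groups.append([x])
            return groups

        def anticipated(groups):
            # Interpolation only: every integer spanned by a group is anticipated.
            return sorted({n for g in groups for n in range(g[0], g[-1] + 1)})

        history = [1, 2, 5, 3]
        print(anticipated(partition(history, max_gap=1)))  # [1, 2, 3, 5] - two classes, 4 not anticipated
        print(anticipated(partition(history, max_gap=2)))  # [1, 2, 3, 4, 5] - one class, 4 anticipated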

    In reality, we are dealing with vast multi-dimensional data sets, but we are applying a similar process of grouping experience together, extrapolating the data within the groups, matching objects to their most fitting group, and predicting their properties based on the anticipated features of the class space at their location.

    P.S.: I agree with your notion for the process of concept refinement, I think.
  • Possibility
    2.8k
    Not all people have practical use for the technical definition, since their life's occupation does not demand it and they have no personal interest in it. But I was contending that those who do use the fully articulated concept will actually stay mentally committed to its full detail, even when they use it crudely in routine actions. Or at least for the most part. They could make intentional exceptions to accommodate conversations. They just won't involve the full extent of their knowledge at the moment. Further, people can disagree on concepts, because of the extrapolations that could be made from them or the expressive power that certain theoretical conceptions offer relative to others.simeonz

    I think I see this. A fully articulated concept is rarely (if at all) stated in its full detail - definitions are constructed from a cascade of conceptual structures: technical terms each with their own technical definitions constructed from more technical terms, and so on. For the purpose of conversations (and to use a visual arts analogy), da Vinci might draw the Vitruvian Man or a stick figure - it depends on the details that need to be transferred, the amount of shared conceptual knowledge we can rely on between us, and how much attention and effort each can spare in the time available.

    I spend a great deal of time looking up and researching terms, concepts and ideas I come across in forum discussions here, because I’ve found that my own routine or common-language interpretations aren’t detailed enough to understand the application. I have that luxury here - I imagine I would struggle to keep up in a face-to-face discussion of philosophy, but I think I am gradually developing the conceptual structures to begin to hold my own.

    Disagreements on concepts here are often the result of both narrow and misaligned qualitative structures or patterns of instances and their extrapolations - posters here such as Tim Wood encourage proposing definitions, so that this variability can be addressed early in the discussion. It’s not always approached as a process of concept refinement, but it can be quite effective when it is.

    I will address the rest of your post when I have more time...
  • simeonz
    310
    For the purpose of conversations (and to use a visual arts analogy), da Vinci might draw the Vitruvian Man or a stick figure - it depends on the details that need to be transferred, the amount of shared conceptual knowledge we can rely on between us, and how much attention and effort each can spare in the time available.Possibility
    Yes, the underlying concept doesn't change - just its expression or application. Although not just in relation to communication, but also in its personal use. Concepts can be applied narrowly by the individual for recognizing objects by their superficial features, but they are still entrenched in full detail in the individual's mind. The concept is subject to change, as you described, because it is gradually refined by the individual and by society. The two, the popularly or professionally ratified one and the personal one, need not agree, and individuals may not always agree on their concepts. Not just superficially, by how they apply the concepts in a given context, but by how those concepts are explained in their mind. However, with enough experience, the collectively accepted technically precise definition is usually the best, because even if sparingly applied in professional context, it is the most detailed one and can be reduced to a distilled form, by virtue of its apparent consequences, for everyday use if necessary.

    The example I gave, with the zero-dimensional inhabitant, was a little bloated and dumb, but it aimed to illustrate that concepts correspond to partitionings of experience. This means that they are both not completely random, because they are anchored in experience, direct or indirect, and a little arbitrary too, because there are multiple ways to partition the same set. I may elaborate the example at a later time, if you deem necessary.
  • Possibility
    2.8k
    The concept is subject to change, as you described, because it is gradually refined by the individual and by society. The two, the popularly or professionally ratified one and the personal one, need not agree, and individuals may not always agree on their concepts. Not just superficially, by how they apply the concepts in a given context, but by how those concepts are explained in their mind. However, with enough experience, the collectively accepted technically precise definition is usually the best, because even if sparingly applied in professional context, it is the most detailed one and can be reduced to a distilled form, by virtue of its apparent consequences, for everyday use if necessary.simeonz

    The best definition is the broadest and most inclusive in relation to instances - so long as we keep in mind that the technical definition is neither precise nor stable, only relatively so. Awareness of, connection to and collaboration with the qualitative variability in even the most precise definition is all part of this process of concept refinement.

    The example I gave, with the zero-dimensional inhabitant, was a little bloated and dumb, but it aimed to illustrate that concepts correspond to partitionings of experience. This means that they are both not completely random, because they are anchored in experience, direct or indirect, and a little arbitrary too, because there are multiple ways to partition the same set. I may elaborate the example at a later time, if you deem necessary.simeonz

    I’m glad you added this. I have some issues with your example - not the least of which is its ‘zero-dimensional’ or quantitative description, which assumes invariability of perspective and ignores the temporal aspect. You did refer to multiple inhabitants, after all, as well as the experience of different quantities ‘over time’, suggesting a three-dimensional universe, not zero. It is the mental process of a particular perspective that begins with a set of quantities - but even without partitioning the set, qualitative relation exists between these experienced quantities to differentiate 1 from 2. A set of differentiated quantities is at least one-dimensional, in my book.
  • simeonz
    310
    I’m glad you added this. I have some issues with your example - not the least of which is its ‘zero-dimensional’ or quantitative description, which assumes invariability of perspective and ignores the temporal aspect.Possibility
    Actually, there are multiple kinds of dimensions here. The features that determine the instant of experience are indeed in one dimension. What I meant is that the universe of the denizen is trivial. The spatial aspect is zero-dimensional; the spatio-temporal aspect is one-dimensional. The quantities are the measurements (think electromagnetic field, photon frequencies/momenta) over this zero-dimensional (one-dimensional with the time axis included) domain. Multiple inhabitants are difficult to articulate, but such a defect from the simplification of the subject is to be expected. You can imagine complex communication would require more than a single point, but that breaks my intended simplicity.

    The idea was this - the child denizen is presented with number 1. The second experience, during puberty, is number 2. The third experience, during adolescence, is number 5. And the final experience, during adulthood, is number 3. The child denizen considers that 1 is the only possibility. Then, after puberty, it realizes that both 1 and 2 can happen. Depending on what faculties for reason we presume here, it might extrapolate, but let's assume only interpolation for the time being. The adolescent denizen encounters 5 and decides to group experiences in category A for 1 and 2 and category B for 5. This facilitates its thinking, but also means that it doesn't have strong anticipation for 3 and 4, because A and B are considered distinct. Then, as an adult, it encounters 3 and starts to contemplate whether 1, 2, 3, and 5 are the same variety of phenomenon with 4 missing yet, but anticipated in the future, or 1, 2, 3 are one group that semantically inherits A (by extending it) while 5 remains distinct. This is a choice that changes the predictions it makes for the future. If two denizens were present in this world, they could contend on the issue.

    This resembles a problem called "cluster analysis". I proposed that this is how our development of new concepts takes place. We are trying to contrast some things we have encountered with others and to create boundaries to our interpolation. In reality, we are not measuring individual quanta. We are receiving multi-dimensional data that heavily aggregates measurements; we perform feature extraction/dimensionality reduction and then correlate multiple dimensions. This also allows us to predict missing features during observation, by exploiting our knowledge of the prior correlations.
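    As a sketch of that last step - predicting a missing feature from prior correlations - here is a toy Python fragment (the samples and feature names are invented for illustration; nearest-neighbour matching stands in for cluster matching):

        # Past observations pair two correlated features; a new observation
        # arrives with one feature missing, and the gap is filled from the
        # most similar prior sample.
        past = [(1.0, 2.1), (1.2, 2.3), (5.0, 9.8), (5.3, 10.4)]  # (size, weight), invented

        def predict_weight(size):
            nearest = min(past, key=lambda sample: abs(sample[0] - size))
            return nearest[1]

        print(predict_weight(1.1))  # 2.1 - filled in from the small-object group
        print(predict_weight(5.1))  # 9.8 - filled in from the large-object group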
  • Possibility
    2.8k
    Ok, I think I can follow you here. The three-dimensional universe of the denizen is assumed trivial, and collapsed to zero. I want to make sure this is clear, so that we don’t invite idealist interpretations that descend into solipsism. I think this capacity (and indeed tendency) to collapse dimensions or to ignore, isolate and exclude information is an important part of the mental process in developing, qualifying and refining concepts.

    Because it isn’t necessarily just the time axis that renders the universe one-dimensional, but the qualitative difference between 1 and 2 as experienced and consolidated quantities. It is this qualitative relation that presents 2 as not-1, 5 as not-3, and 3 as closer to 2 than to 5. Our grasp of numerical value structure leads us to take this qualitative relation for granted.

    I realise that it seems like I’m being pedantic here. It’s important to me that we don’t lose sight of these quantities as qualitative relational structures in themselves. We can conceptualise but struggle to define more than three dimensions, and so we construct reductionist methodologies (including science, logic, morality) and complex societal structures (including culture, politics, religion, etc) as scaffolding that enables us to navigate, test and refine our conceptualisation of what currently appears (in my understanding) to be six dimensions of relational structure.

    How we define a concept relies heavily on these social and methodological structures to avoid prediction error as we interact across all six dimensions, but they are notoriously subjective, unstable and incomplete. When we keep in mind the limited three-dimensional scope of our concept definitions (like a map that fails to account for terrain or topology) and the subjective uncertainty of our scaffolding, then I think we gain a more accurate sense of our conceptual systems as heuristic in relation to reality.
  • Possibility
    2.8k
    In relation to the OP...

    From IEP: “So, the issue at the heart of the Problem of the Criterion is how to start our epistemological theorizing in the correct way, not how to discover a theory of the nature of truth.“

    The way I see it, the correct way to start our epistemological theorising is to acknowledge the contradiction at the heart of the Problem of the Criterion. We can never be certain which propositions are true, nor can we be certain of the accuracy in our methodologies to determine the truth of propositions. Epistemology in relation to truth is a process of incremental advancement towards this paradox - by interaction between identification of instances in experience and definition of the concept via reductionist methodologies.

    If an instance doesn’t refine either the definition or the methodology, then it isn’t contributing to knowledge. To focus our mental energies on calculating probability, for instance, only consolidates existing knowledge - it doesn’t advance our understanding, choosing to ignore the Problem and exclude qualitative variability instead of facing the inevitable uncertainty in our relation to truth. That’s fine, as long as we recognise this ignorance, isolation or exclusion of qualitative aspects of reality as part of the mental process. Because when we apply this knowledge as defined, our application must take into account the limitations of the methodology and resulting definition in relation to our capacity to accurately identify instances. In other words, we need to add the qualitative variability back in, or else we limit our practical understanding of reality - which is arguably more important. So, by the same token, if a revised definition or reductionist methodology doesn’t improve our experience of instances, thereby reducing prediction error, then it isn’t contributing to knowledge.
  • simeonz
    310

    Edit: Sorry for not replying, but I am in a sort of a flux. I apologize, but I expect that I may tarry awhile between replies even in the future.

    This is too vast a landscape to be dealt with properly in a forum format. I know this is sort-of a bail out from me, but really, it is a serious subject. I wouldn't be the right person to deal with it, because I don't have the proper qualification.

    The oversimplification I made was multi-fold. First, I didn't address hereditary and collective experience. It involved the ability to discern the quantities, a problem about which you inquired, and which I would have explained as genetically inclined. How genetics influence conceptualization, and the presence of motivation for natural selection that fosters basic awareness of endemic world features, would need to be explained. Second, I reduced the feature space to a single dimension, an abstract integer, which avoided the question of making correlations and having to pick the most discerning descriptors, i.e. dimensionality reduction. I also compressed the object space to a single point, which dispensed with a myriad of issues, such as identifying objects in their environment, during stasis or in motion, anticipation of features obscured from view, assessment of orientation, and assessment of distance.

    The idea of this oversimplification was merely to illustrate how concepts correspond to classes in taxonomies of experience. And in particular, that there is no real circularity. There was ambiguity stemming from the lack of unique ascription of classes to a given collection of previously observed instances. As in the case of 3, there is an inherent inability to decide whether it falls into the group of 1 and 2, or bridges 1 and 2 with 5. However, assigning 1 and 3 to one class, and 2 and 5 to a different class, would be solving the problem counter-productively. Therefore, the taxonomy isn't formed in an arbitrary personal fashion. It follows the objective of best discernment without excessive distinction.
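    One way to make 'best discernment without excessive distinction' concrete is as a scoring rule over candidate partitions. The rule and weights below are invented, offered only as a Python sketch:

        # Penalise within-class spread (poor discernment) plus a cost per class
        # (excessive distinction); lower is better. The weights are arbitrary.
        def score(partition, class_penalty=1.5):
            spread = sum(max(g) - min(g) for g in partition)
            return spread + class_penalty * len(partition)

        candidates = [
            [[1, 2, 3, 5]],        # one class:             score 5.5
            [[1, 2, 3], [5]],      # extend A, 5 distinct:  score 5.0 (best)
            [[1, 3], [2, 5]],      # counter-productive:    score 8.0 (worst)
            [[1], [2], [3], [5]],  # all singletons:        score 6.0
        ]
        for p in candidates:
            print(p, score(p))

    Under this toy objective, grouping 1 with 3 and 2 with 5 scores worst, matching the intuition above.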

    No matter what process actually attains plausible correspondence, what procedure is actually used to create the taxonomy, no matter the kind of features that are used to determine the relative disposition of new objects/samples to previous objects/samples and how the relative location of each one is judged, what I hoped to illustrate was that concepts are not designed so much according to their ability to describe common structure of some collection of objects, but according to their ability to discriminate objects from each other in the bulk of our experience. This problem can be solved even statically, albeit with enormous computational expense.

    What I hoped to illustrate is that concepts can be both fluid and stable. New objects/impressions can appear in previously unpopulated locations of our experience, or unevenly saturate locations to the extent that new classes form from the division of old ones, or fill the gaps between old classes, creating continuity between them and merging them together. In that sense, the structure of our concept map is flexible. Hence, our extrapolations, our predictions, which depend on how we partition our experience into categories with symmetric properties, change in the process. Concepts can converge, because experience, in general, accumulates and can itself converge. The concepts, in theory, should gradually reach some maximally informed model.
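    Continuing the earlier gap-based toy (same invented rule), a new observation that fills the gap between classes merges them - the fluidity being described:

        # Same gap rule as the earlier sketch: when 4 arrives, it fills the gap
        # between {1, 2, 3} and {5}, and the two classes merge into one.
        def partition(samples, max_gap=1):
            samples = sorted(samples)
            groups = [[samples[0]]]
            for x in samples[1:]:
                if x - groups[-1][-1] <= max_gap:
                    groups[-1].append(x)
                else:
                    groups.append([x])
            return groups

        print(partition([1, 2, 5, 3]))     # [[1, 2, 3], [5]] - two classes
        print(partition([1, 2, 5, 3, 4]))  # [[1, 2, 3, 4, 5]] - gap filled, classes merge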

    Again, to me, all this corresponds to the "cluster analysis" and "dimensionality reduction" problems.

    You are correct that I did presuppose quantity discernment and distance measurement (or, in 1-D, difference computation). The denizen knows how to deal with so-called "affine spaces". I didn't want to go there. That opens an entirely new discussion.

    Just to scratch the surface with a few broad strokes here. We know we inherit a lot genetically, environmentally, culturally. Our perception system, for example, utilizes more than 5 senses that we manage to somehow correlate. The auditory and olfactory senses are probably the least detailed, being merely in stereo. But the visual system starts with about 6 million bright-illumination photoreceptor cells and many more low-illumination photoreceptor cells, unevenly distributed on the retina. Those are processed by a cascade of neural networks, eventually ending in the visual cortex and visual association cortex. In between, people merge the monochromatic information from the photoreceptors into color spectrum information, ascertain depth, increase the visual acuity of the image by superimposing visual input from many saccadic eye movements (sharp eye fidgeting), discern contours, detect objects in motion, etc. I am no expert here, but I want to emphasize that we have inherited a lot of mental structure in the form of hierarchical neural processing. Considering that the feature space of our raw senses is in the millions of bits, having perceptual structure as heritage plays a crucial role in our ability to further conceptualize our complex environment, by reinforcement, by trial and error.

    Another type of heritage is proposed by Noam Chomsky. He describes, with apparently some supporting evidence, that people are not merely linguistic by nature, but endowed with inclinations to easily develop linguistic articulations of specific categories of experience in the right environment. Not just basic perception-related concepts, but abstract tokens of thought. This may explain why we are so easily attuned to logic, quantities, social constructs of order, and pro-social behaviors, like ethical behaviors, affective empathy (i.e. love), etc. I am suggesting that we use classification to develop concepts from individual experience. This should happen inside the neural network of our brain, somewhere after our perception system and before decision making. I am only addressing part of the issue. I think that nature also genetically programs classifiers into the species' behavior, by incorporating certain awareness of experience categories in their innate responses. There is also the question of social Darwinism. Because natural selection applies to the collective, the individuals are not necessarily compelled to identical conceptualization. Some conceptual inclinations are conflicting, to keep the vitality of the community.
  • Possibility
    2.8k
    Sorry for not replying, but I am in a sort of a flux. I apologize, but I expect that I may tarry awhile between replies even in the future.

    This is too vast a landscape to be dealt with properly in a forum format. I know this is sort-of a bail out from me, but really, it is a serious subject. I wouldn't be the right person to deal with it, because I don't have the proper qualification.
    simeonz

    No problem. Persist with the flux - I think it can be a productive state to be in, despite how it might feel. I realise that my approach to this subject is far from conventional, so I appreciate you making the effort to engage. I certainly don’t have any ‘proper’ qualifications in this area myself. But I also doubt that anyone would be sufficiently qualified on their own. As you say, the landscape is too vast. In my view that makes a forum more suitable, not less.

    The idea of this oversimplification was merely to illustrate how concepts correspond to classes in taxonomies of experience. And in particular, that there is no real circularity. There was ambiguity stemming from the lack of unique ascription of classes to a given collection of previously observed instances. As in the case of 3, there is an inherent inability to decide whether it falls into the group of 1 and 2, or bridges 1 and 2 with 5. However, assigning 1 and 3 to one class, and 2 and 5 to a different class, would be solving the problem counter-productively. Therefore, the taxonomy isn't formed in an arbitrary personal fashion. It follows the objective of best discernment without excessive distinction.simeonz

    It is the qualification of ‘best discernment without excessive distinction’ that perhaps needs more thought. Best in what sense? According to which value hierarchy? And at what point is the distinction ‘excessive’? It isn’t that the taxonomy is formed in an arbitrarily personal fashion, but rather intersubjectively. It’s a process and methodology developed initially through religious, political and cultural trial and error - manifesting language, custom, law and civility as externally predictive, four-dimensional landscapes from the correlation of human instances of being.

    The recent psychology/neuroscience work of Lisa Feldman Barrett in developing a constructed theory of emotion is shedding light on the ‘concept cascade’, and the importance of affect (attention/valence and effort/arousal) in how even our most basic concepts are formed. Alongside recent descriptions in physics (eg. Carlo Rovelli) of the universe consisting of ‘interrelated events’ rather than objects in time, Barrett’s theory leads to an idea of consciousness as a predictive four-dimensional landscape from ongoing correlation of interoception and conception as internally constructed, human instances of being.

    But the challenge (as Rovelli describes) is to talk about reality as four-dimensional with a language that is steeped in a 3+1 perspective (subject-object and tensed verb). Consider a molecular structure of two atoms ‘sharing’ an electron - in a similar way, the structure of human consciousness can be seen to consist of two constructed events ‘sharing’ a temporal aspect. This five-dimensional anomalous relation of potentiality/knowledge/value manifests as an ongoing prediction in affect: the instructional ‘code’ for both interoception and conception. How we go about proving this is beyond my scientific capacity, but I believe the capacity exists, nonetheless. As philosophers, our task is to find a way to frame the question.

    No matter what process actually attains plausible correspondence, what procedure is actually used to create the taxonomy, no matter the kind of features that are used to determine the relative disposition of new objects/samples to previous objects/samples and how the relative location of each one is judged, what I hoped to illustrate was that concepts are not designed so much according to their ability to describe common structure of some collection of objects, but according to their ability to discriminate objects from each other in the bulk of our experience. This problem can be solved even statically, albeit with enormous computational expense.

    What I hoped to illustrate is that concepts can be both fluid and stable. New objects/impressions can appear in previously unpopulated locations of our experience, or unevenly saturate locations to the extent that new classes form from the division of old ones, or fill the gaps between old classes, creating continuity between them and merging them together. In that sense, the structure of our concept map is flexible. Hence, our extrapolations, our predictions, which depend on how we partition our experience into categories with symmetric properties, change in the process. Concepts can converge, because experience, in general, accumulates and can itself converge. The concepts, in theory, should gradually reach some maximally informed model.
    simeonz

    I agree that concepts are not designed according to their ability to describe common structure or essentialism, but to differentiate between aspects of experience. Partitioning our experience into categories is part of the scientific methodology by which we attempt to make sense of reality in terms of ‘objects in time’.

    I also agree that concepts can be perceived as both fluid and stable. This reflects our understanding of wave-particle duality (I don’t think this is coincidental). But I also think the ‘maximally-informed model’ we’re reaching for is found not in some eventual stability of concepts, but in developing an efficient relation to their fluidity - in our awareness, connection and collaboration with relations that transcend or vary conceptual structures.

    It’s more efficient to discriminate events than objects from each other in the bulk of our experience. Even though our language structure is based on objects in time, we interact with the world not as an object, but as an event at our most basic, and that event is subject to ongoing variability. ‘Best discernment without excessive distinction’ then aims for allostasis - stability through variability - not homeostasis. This relates to Barrett as mentioned above.

    I guess I wanted to point out that there is more structural process to the development of concepts than categorising objects of experience through cluster analysis or dimensionality reduction, and that qualitative relations across multiple dimensional levels play a key role.
  • simeonz
    310
    It is the qualification of ‘best discernment without excessive distinction’ that perhaps needs more thought. Best in what sense? According to which value hierarchy? And at what point is the distinction ‘excessive’? It isn’t that the taxonomy is formed in an arbitrarily personal fashion, but rather intersubjectively. It’s a process and methodology developed initially through religious, political and cultural trial and error - manifesting language, custom, law and civility as externally predictive, four-dimensional landscapes from the correlation of human instances of being.Possibility

    You are right that many complex criteria are connected to values, but the recognition of basic object features, I believe, is not. As I mentioned, we should account for the complex hierarchical cognitive and perceptual faculties with which we are endowed from the get go. At least, we know that our perceptual system is incredibly elaborate, and doesn't just feed raw data to us. As infants, we don't start from a blank slate and become conditioned by experience and interactions to detect shapes, recognize objects, assess distances. Those discernments are essential to how we later create simple conceptualizations, and are completely hereditary. And although this is a more tenuous hypothesis, like Noam Chomsky, I do actually believe that some abstract notions - such as length, order and symmetry, identity, compositeness, self, etc. - are actually biologically pre-programmed. Not to the point where they are inscribed directly in the brain, but their subsequent articulation is heavily inclined, and under exposure to the right environment, the predispositions trigger infant conceptualization. I think of this through an analogy with embryonic development. Fertilized eggs cannot develop physically outside the womb, but in its conditions, they are programmed to divide and organize rapidly into a fetus form. I think this happens neurologically with us when we are exposed to the characteristic physical environment during infancy.

    This heritage hypothesis can appear more reasonable in light of the harmonious relationship between any cognizant organism and the laws of the environment in which it operates. To some extent, even bacterial lifeforms need to be robotically aware of the principles governing their habitat. Our evolutionary history transpired in the presence of the same constraining factors, such as the inertial physical law for objects moving in the absence of forces, and thus it is understandable that our cognitive apparatus would be primed to anticipate the dynamics in question, with a rudimentary sense of lengths and quantities. Even if such notions are not explicit, the relationship between our reconstruction of the features of the world and the natural laws would be approximately homomorphic. And the hypothesis is that at some point after the appearance of linguistic capabilities, we were further compelled by natural selection towards linguistic articulation of these mental reconstructions through hereditary conceptualization, whereas fundamental discernment of features of appearances would have developed even earlier, being more involuntary and unconscious.

    The recent psychology/neuroscience work of Lisa Feldman Barrett in developing a constructed theory of emotion is shedding light on the ‘concept cascade’, and the importance of affect (attention/valence and effort/arousal) in how even our most basic concepts are formed. Alongside recent descriptions in physics (eg. Carlo Rovelli) of the universe consisting of ‘interrelated events’ rather than objects in time, Barrett’s theory leads to an idea of consciousness as a predictive four-dimensional landscape from ongoing correlation of interoception and conception as internally constructed, human instances of being.Possibility
    Maybe I am misreading the argument. Affective dispositions are essential to human behavior where social drives and other emotions come into play, but people also apply a layer of general intelligence. I will try to make a connection to a neurological condition of reduced amygdala volume, which renders people incapable of any affective empathy, and for the most part, highly diminishes their sense of anxiety. They are capable of feeling only anger or satisfaction, but the feelings fade quickly. Such individuals are extremely intelligent, literate, articulate. They conceptualize the world slightly differently, but are otherwise capable of the same task planning and anticipation. Considering the rather placid nature of their emotions (compared to a neurotypical), and the exhibition of reasonably similar perception of the world, intelligence isn't that reliant on affective conditions. Admittedly, they still do have cognitive dispositions, feel pain or pleasure, have basic needs as well, are unemotionally engaged with society and subject to culture and norms (to a smaller extent). But the significant disparity in affective stimuli and the relative closeness to us in cognitive output appears to imply that affective dispositions are a secondary factor for conceptualization. At least on a case by case basis. I am not implying that if we all had smaller amygdala volume, it wouldn't transform the social perception.

    I also agree that concepts can be perceived as both fluid and stable. This reflects our understanding of wave-particle duality (I don’t think this is coincidental). But I also think the ‘maximally-informed model’ we’re reaching for is found not in some eventual stability of concepts, but in developing an efficient relation to their fluidity - in our awareness, connection and collaboration with relations that transcend or vary conceptual structures.Possibility
    To be honest, it depends on whether a person can reach a maximally informed state, or at least a sufficiently informed state, with respect to a certain aspect of their overall experience. For example, quantum mechanics changed a lot about our perception of atoms, and atoms changed a lot about our perception of the reaction of objects to heat, but I think that to some extent, a chair is still a chair to us, as it was in antiquity. I think that while we might perceive certain features of a chair differently, such as what happens when we burn it, or how much energy is in it, or what is in it, its most basic character, namely that of an object which offers solid support for your body when you rest yourself on it, is unchanged. The problem with the convergence of information is its reliance on the potential to acquire most of the discernment value from a reasonably small number of observations. After all, this is a large universe, with intricate detail, lasting a long time.

    It’s more efficient to discriminate events than objects from each other in the bulk of our experience. Even though our language structure is based on objects in time, we interact with the world not as an object, but as an event at our most basic, and that event is subject to ongoing variability. ‘Best discernment without excessive distinction’ then aims for allostasis - stability through variability - not homeostasis. This relates to Barrett as mentioned above.Possibility
    I do believe that intelligence, to a great extent, functions like a computer trying to evaluate outcomes from actions according to some system of values. The values are indeed derived from many factors. I do agree that there are implicit aspects to our intelligence strongly engaged with ecosystemic stability, where the person is only one actor in the environment and tries to enter into correct symbiotic alignment with it. The function of the personal intelligence becomes allostatically aimed, as you describe. On the other hand, there are aspects to our intelligence, not always that clearly separated, but at least measurably autonomous from this type of conformant symbiotic thinking, that are concerned with representational accuracy. You are right there that I was focusing more on this type of conceptual mapping, and indeed, it is the only one that is homeostatically aimed. In fact, the recent discussions in the forum were addressing the subject of belief and its relationship to truth, and I meant to express my opinion, which exactly follows these lines: that our personal ideas can seek alignment with the world either by exploring compelling facts outside of our control, or by maneuvering ourselves through the space of possible modes of being and trying to adjust according to our consequent experience. The distinction and the relationship between the two is apparently of interest, but is also difficult to reconcile. Also, I was referring to objects, but objects are merely aspects of situations. Even further, as you suggest, situations are merely aspects of our relation to the context in which these situations occur. I was simplifying on one hand, and also, I do indeed think that we do classify objects as well, since thankfully we have the neurological aptitude to separate them from the background and to compress their features, thanks to our inherited perception apparatus and rudimentary conceptualization skill.

    I guess I wanted to point out that there is more structural process to the development of concepts than categorising objects of experience through cluster analysis or dimensionality reduction, and that qualitative relations across multiple dimensional levels play a key role.Possibility
    In retrospect, I think that there are two nuances to intelligence, and I was addressing only one - the empirically, representationally aimed one.

    Edit: I should also point out that the intelligence you describe is the more general mechanism. I have previously referred to a related notion of distinction, that of pragmatic truth versus representational truth. And pragmatic truth, as I have stated, is the more general form of awareness. But it is also the less precise and more difficult to operate. It is outside the boundary of empiricism. Your description of allostatic conceptualization is actually something slightly different, yet related. It brings a new quality to pragmatic truth for me. I usually focus on empirical truth. Not because I want to dispense with the other, but because it has the more obvious qualities. Even if both are evidently needed, the latter operates under the former.
  • Possibility
    2.8k
    I will try to make a connection to a neurological condition of reduced amygdala volume, which renders people incapable of any affective empathy, and for the most part, highly diminishes their sense of anxiety. They are capable of feeling only anger or satisfaction, but the feelings fade quickly. Such individuals are extremely intelligent, literate, articulate. They conceptualize the world slightly differently, but are otherwise capable of the same task planning and anticipation. Considering the rather placid nature of their emotions (compared to a neurotypical), and the exhibition of reasonably similar perception of the world, intelligence isn't that reliant on affective conditions. Admittedly, they still do have cognitive dispositions, feel pain or pleasure, have basic needs as well, are unemotionally engaged with society and subject to culture and norms (to a smaller extent). But the significant disparity in affective stimuli and the relative closeness to us in cognitive output appears to imply that affective dispositions are a secondary factor for conceptualization. At least on a case by case basis.simeonz

    This is a common misunderstanding of affect and the amygdala, supported by essentialism, the mental inference fallacy and the misguided notion of a triune brain structure. The amygdala has been more recently proven NOT to be the source of emotion in the brain - it activates in response to novel situations, not necessarily emotional ones. Barrett refers to volumes of research dispelling claims that the amygdala is the brain location of emotion (even of fear or anxiety). Interpretations of behaviour in those with reduced or even destroyed amygdala appear to imply the secondary nature of affect because that’s our preference. We like to think of ourselves as primarily rational beings, with the capacity to ‘control’ our emotions. In truth, evidence shows that it’s more efficient to understand and collaborate with affect in determining our behaviour - we can either adjust for affect or try to rationalise it after the fact, but it remains an important aspect of our relation to reality.

    The bottom line is this: the human brain is anatomically structured so that no decision or action can be free of interoception and affect, no matter what fiction people tell themselves about how rational they are. Your bodily feeling right now will project forward to influence what you will feel and do in the future. It is an elegantly orchestrated self-fulfilling prophecy, embodied within the architecture of your brain. — Lisa Feldman Barrett, ‘How Emotions Are Made’

    I’m not sure which research or case studies you’re referring to above (I’m not sure if the subjects were born with reduced amygdala or had it partially removed, and I think this makes a difference in how I interpret the account) but from what you’ve provided, I’d like to make a few points. I don’t think that an impaired or reduced access to interoception of affect makes much difference to one’s capacity for conceptualisation, or their intelligence as commonly measured. I think it does, however, make a difference to their capacity to improve accuracy in their conceptualisation of social reality in particular, and to their overall methodology in refining concepts. They lack information that enables them to make adjustments to behaviour based on social cues, but thanks to the triune brain theory and our general preference for rationality, they’re unlikely to notice much else in terms of ‘impairment’.

    I would predict that they may also have an interest in languages, mathematics, logic and morality - because these ensure they have most of the information they need to develop concepts without the benefit of affect. They may also have a sense of disconnection between their physical and mental existence, relatively less focus on sporting or sexual activity, and an affinity for computer systems and artificial intelligence.

    As for anxiety, this theoretically refers to the amount of prediction error we encounter from a misalignment of conception and interoception. If there’s reduced access to interoception of affect by conceptualisation systems, there’s less misalignment.

    You are right that many complex criteria are connected to values, but the recognition of basic object features, I believe, is not. As I mentioned, we should account for the complex hierarchical cognitive and perceptual faculties with which we are endowed from the get go. At least, we know that our perceptual system is incredibly elaborate, and doesn't just feed raw data to us. As infants, we don't start from a blank slate and become conditioned by experience and interactions to detect shapes, recognize objects, assess distances. Those discernments are essential to how we later create simple conceptualizations, and are completely hereditary. And although this is a more tenuous hypothesis, like Noam Chomsky, I do actually believe that some abstract notions - such as length, order and symmetry, identity, compositeness, self, etc. - are actually biologically pre-programmed. Not to the point where they are inscribed directly in the brain, but their subsequent articulation is heavily inclined, and under exposure to the right environment, the predispositions trigger infant conceptualization. I think of this through an analogy with embryonic development. Fertilized eggs cannot develop physically outside the womb, but in its conditions, they are programmed to divide and organize rapidly into a fetus form. I think this happens neurologically with us when we are exposed to the characteristic physical environment during infancy.simeonz

    Well, not once you’ve identified them as objects, no. I don’t think that’s how these initial concepts are developed, though. I think the brain and sensory systems are biologically structured to develop a variety of conceptual structures rapidly and efficiently, some even prior to birth. Barrett compares early development of concepts to the computer process of sampling, where similarities are separated from differences, and only what is different from one frame or pixel to the next is transmitted:

    For example, the visual system represents a straight line as a pattern of neurons firing in the primary visual cortex. Suppose that a second group of neurons fires to represent a second line at a ninety-degree angle to the first line. A third group of neurons could summarise this statistical relationship between the two lines efficiently as a simple concept of ‘angle’. The infant brain might encounter a hundred different pairs of intersecting line segments of varying lengths, thicknesses, and colour, but conceptually they are all instances of ‘angle’, each of which gets efficiently summarised by some smaller group of neurons. These summaries eliminate redundancy. In this manner, the brain separates statistical similarities from sensory differences. — Barrett
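
    To make the sampling analogy concrete, here is a minimal sketch (my own illustration of the engineering technique Barrett alludes to, not a claim about neural implementation) of delta encoding, where only what differs from one frame to the next is transmitted and everything redundant is dropped:

    ```python
    def delta_encode(frames):
        """Transmit the first frame in full, then only per-pixel changes.
        Redundant (unchanged) values are dropped, as in video sampling."""
        prev = frames[0]
        encoded = [("full", prev)]
        for frame in frames[1:]:
            changes = {i: v for i, (p, v) in enumerate(zip(prev, frame)) if v != p}
            encoded.append(("delta", changes))
            prev = frame
        return encoded

    frames = [
        [0, 0, 1, 1],  # frame 1
        [0, 0, 1, 2],  # frame 2: only the last pixel changed
        [0, 0, 1, 2],  # frame 3: nothing changed
    ]
    print(delta_encode(frames))
    # [('full', [0, 0, 1, 1]), ('delta', {3: 2}), ('delta', {})]
    ```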

    I do, however, believe that notions such as distance, shape, space, time, value and meaning refer to an underlying qualitative structure of reality that is undeniable. We ‘feel’ these notions long before we’re able to conceptualise them.

    This heritage hypothesis can appear more reasonable in light of the harmonious relationship between any cognizant organism and the laws of the environment in which it operates. To some extent, even bacterial lifeforms need to be robotically aware of the principles governing their habitat. Our evolutionary history transpired in the presence of the same constraining factors, such as the inertial physical law for objects moving in the absence of forces, and thus it is understandable that our cognitive apparatus would be primed to anticipate the dynamics in question, with a rudimentary sense of lengths and quantities. Even if such notions are not explicit, the relationship between our reconstruction of the features of the world and the natural laws would be approximately homomorphic. And the hypothesis is that at some point after the appearance of linguistic capabilities, we were further compelled by natural selection towards linguistic articulation of these mental reconstructions through hereditary conceptualization. Whereas fundamental discernment of features of appearances would have developed even earlier, being more involuntary and unconscious.simeonz

    I think bacterial lifeforms are aware of the principles governing their habitat only to the extent that they impact allostasis. Any rudimentary sense of values would be initially qualitative, not quantitative - corresponding to the ongoing interoception of valence and arousal in the organism. But as Barrett suggests, the neuronal structure conceptualises in order to summarise for efficiency, separating statistical similarities from sensory differences to eliminate redundancy. Our entire evolutionary development has been in relation to the organism’s capacity to more efficiently construct and refine conceptual systems and structures for allostasis from a network of interoceptive systems. The systems and network we’ve developed now consist of whole brain processes, degeneracy, feedback loops and a complex arrangement of checks and balances, budgeting the organism’s ongoing allocation of attention and effort.

    This is a long post already - I will return to the rest of your post later...
  • Possibility
    2.8k
    To be honest, it depends on whether a person can reach a maximally informed state, or at least a sufficiently informed state, with respect to a certain aspect of their overall experience. For example, quantum mechanics changed a lot about our perception of atoms, and atoms changed a lot about our perception of the reaction of objects to heat, but I think that to some extent, a chair is still a chair to us, as it was in antiquity. I think that while we might perceive certain features of a chair differently, such as what happens when we burn it, or how much energy is in it, or what is in it, its most basic character, namely that of an object which offers solid support for your body when you rest yourself on it, is unchanged. The problem with the convergence of information is its reliance on the potential to acquire most of the discernment value from a reasonably small number of observations. After all, this is a large universe, with intricate detail, lasting a long time.simeonz

    I think this sense that a chair is still a chair to us relates to goal-oriented concepts. Barrett references the work of cognitive scientist Lawrence W. Barsalou, and demonstrates that we are pre-programmed to develop goal-oriented concepts effortlessly: to categorise seemingly unconnected instances - such as a fly swatter, a beekeeper’s suit, a house, a car, a large trash can, a vacation in Antarctica, a calm demeanour and a university degree in entomology - under purely mental concepts such as ‘things that protect you from stinging insects’. “Concepts are not static but remarkably malleable and context-dependent, because your goals can change to fit the situation.” So if an object meets that goal for you, then it’s a chair, whether it’s made of wood or plastic, shaped like a box or a wave, etc.
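
    A minimal sketch of the idea (my own illustration; the items and properties are hypothetical): a goal-oriented concept can be modelled as a predicate over an item's relation to the current goal, so the same item falls in or out of a category as the goal changes:

    ```python
    # Items described by ad hoc properties; a goal-oriented concept is a
    # predicate over those properties, so membership shifts with the goal.
    items = {
        "wooden stool":   {"supports_body": True,  "blocks_insects": False},
        "plastic crate":  {"supports_body": True,  "blocks_insects": False},
        "beekeeper suit": {"supports_body": False, "blocks_insects": True},
        "car":            {"supports_body": True,  "blocks_insects": True},
    }

    def instances(goal, items):
        return [name for name, props in items.items() if goal(props)]

    chair_for_me = lambda p: p["supports_body"]  # goal: rest my body
    sting_proof = lambda p: p["blocks_insects"]  # goal: avoid stings

    print(instances(chair_for_me, items))  # ['wooden stool', 'plastic crate', 'car']
    print(instances(sting_proof, items))   # ['beekeeper suit', 'car']
    ```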

    In retrospect, I think that there are two nuances to intelligence, and I was addressing only one: the one aimed at empirical representation.

    Edit. I should also point out that the intelligence you describe is the more general mechanism. I have previously referred to a related distinction, that of pragmatic truth versus representational truth. And pragmatic truth, as I have stated, is the more general form of awareness. But it is also the less precise and more difficult to operate. It is outside the boundary of empiricism. Your description of allostatic conceptualization is actually something slightly different, yet related. It brings a new quality to pragmatic truth for me. I usually focus on empirical truth. Not because I want to dispense with the other, but because it has the more obvious qualities. Even if both are evidently needed, the latter then operates under the former.
    simeonz

    Charles Peirce’s pragmaticist theory of fallibilism, as described in Wikipedia’s article on empiricism: “The rationality of the scientific method does not depend on the certainty of its conclusions, but on its self-corrective character: by continued application of the method science can detect and correct its own mistakes, and thus eventually lead to the discovery of truth”. The historical oppression of pragmatic truth by empirical truth translates to a fear of uncertainty - of being left without solid ground to stand on.

    Yes, pragmatic truth is less precise in a static sense, but surely we are past the point of insisting on static empirical statements? Quantum mechanics didn’t just change our perception of atoms, but our sense that there is a static concreteness underlying reality. We are forced to concede a continual state of flux, which our sensory limitations as human observers require us to statistically summarise and separate from its qualitative variability, in order to relate it to our (now obviously limited sense of) empirical truth. Yet pragmatically, the qualitative variability of quantum particles is regularly applied as a prediction of attention and effort with unprecedented precision and accuracy.

    Max Planck struggled for years to derive a formula that fit the experimental data. In frustration, he decided to work the problem backward. He would first try to guess a formula that agreed with the data and then, with that as a hint, try to develop the proper theory. In a single evening, studying the data others had given him, he found a fairly simple formula that worked perfectly. — Rosenblum and Kuttner, ‘Quantum Enigma: Physics Encounters Consciousness’
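
    For reference, the ‘fairly simple formula’ Planck arrived at that evening is what we now call Planck’s radiation law, giving the spectral radiance of a black body at frequency $\nu$ and temperature $T$:

    $$B_\nu(\nu, T) = \frac{2h\nu^3}{c^2}\,\frac{1}{e^{h\nu/k_B T} - 1}$$

    The constant $h$, introduced simply to make the formula fit the data, became the quantum of action; $k_B$ is Boltzmann’s constant and $c$ the speed of light.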
  • simeonz
    310
    This is a common misunderstanding of affect and the amygdala, supported by essentialism, mental inference fallacy and the misguided notion of a triune brain structure. The amygdala has been more recently proven NOT to be the source of emotion in the brain - it activates in response to novel situations, not necessarily emotional ones. Barrett refers to volumes of research dispelling claims that the amygdala is the brain location of emotion (even of fear or anxiety). Interpretations of behaviour in those with reduced or even destroyed amygdala appear to imply the secondary nature of affect because that’s our preference. We like to think of ourselves as primarily rational beings, with the capacity to ‘control’ our emotions. In truth, evidence shows that it’s more efficient to understand and collaborate with affect in determining our behaviour - we can either adjust for affect or try to rationalise it after the fact, but it remains an important aspect of our relation to reality.Possibility
    I know that it is me who brought it up, but I dare say that the precise function of the amygdala is not that relevant to our discussion. Unless you are drawing conclusions from the mechanism by which people attain these anomalous traits, I would consider the explanation outside the topic. Regarding the quality of cognitive and neurological research, I assume that interpretational latitude exists, but the conclusions are still drawn from correlations between activation of the brain region and cues of affect after exposure to perceptual stimulus. From brief skimming over the summaries of a few recent papers, I am left with the impression that there appears to be no clear and hard assertion at present, but what is stated is that there might be primary and secondary effects, and interplay between this limbic component and other cognitive functions. Until I have evidence that allows me to draw my own conclusion, I am assuming that the predominant opinion of involvement in emotional processing is not completely incorrect.

    I’m not sure which research or case studies you’re referring to above (I’m not sure if the subjects were born with reduced amygdala or had it partially removed, and I think this makes a difference in how I interpret the account), but from what you’ve provided, I’d like to make a few points. I don’t think that impaired or reduced access to interoception of affect makes much difference to one’s capacity for conceptualisation, or to one’s intelligence as commonly measured. I think it does, however, make a difference to their capacity to improve accuracy in their conceptualisation of social reality in particular, and to their overall methodology in refining concepts. They lack information that enables them to make adjustments to behaviour based on social cues, but thanks to the triune brain theory and our general preference for rationality, they’re unlikely to notice much else in terms of ‘impairment’.Possibility
    Psychiatry labels the individuals I was referring to as having antisocial personality disorder, but that is a broad-stroke diagnosis. The hereditary variant of the condition goes under additional titles in related fields - forensic psychology and neurology call it psychopathy. Since psychopaths are not experiencing overwhelming discomfort from their misalignment with pro-social behaviors, they are almost never voluntary candidates for treatment and are rather poorly researched. I am not at all literate on the subject, but I am aware of one paper that was produced in collaboration with one such affected individual. According to the same person, a dozen genes are potentially involved as well, some affecting neurotransmitter bindings, and from my observation of the responses given by self-attested psychopaths on Quora, the individuals indeed confirm smaller amygdala volume. This is a small sample, but I am primarily interested in the fact that their callous-unemotional traits seem to be no obstruction to having reasonably eloquent exchanges. They can interpret situations cognitively, even if they lack emotional perception of social cues.

    Psychopaths do not report being completely unemotive. They can enjoy scenery. The production of gratifying feeling from successful mental anticipation and analysis of form, as you describe, from music or visual arts, is not foreign to them. Probably less expressively manifest than in a neurotypical, but not outright missing.

    I would predict that they may also have an interest in languages, mathematics, logic and morality - because these ensure they have most of the information they need to develop concepts without the benefit of affect. They may also have a sense of disconnection between their physical and mental existence, relatively less focus on sporting or sexual activity, and an affinity for computer systems and artificial intelligence.Possibility
    There might be an allusion here. I am not getting my information first hand. I would characterize myself as neurotic. Granted, a psychopath would mask themselves, so you could make of it what you will, but I am at worst slightly narcissistic.

    As for anxiety, this theoretically refers to the amount of prediction error we encounter from a misalignment of conception and interoception. If there’s reduced access to interoception of affect by conceptualisation systems, there’s less misalignment.Possibility
    What you describe seems more like being in a surprised state. I am thinking more along the lines of oversensitivity and impulsiveness, heightened attention, resulting from the perception of impactfulness and uncertainty. In any case, psychopaths claim that both their fear and anxiety responses are diminished.

    I do, however, believe that notions such as distance, shape, space, time, value and meaning refer to an underlying qualitative structure of reality that is undeniable. We ‘feel’ these notions long before we’re able to conceptualise them.Possibility
    I understand that you specifically emphasize that we perceive, and indeed this is in opposition to Chomsky's theory of innate conceptualization. Granted, perception does not rely on abstractly coded mental awareness. But even if we agree to disagree regarding the plausibility of Chomsky's claim, what you call feeling I could be justified in calling perceptual cognition. Even pain is registration of an objective physical stimulus (unless there is a neurological disorder of some kind), and as analytically blocking and agonizing as it can be, it is not intended to be personally interpretative.

    I think bacterial lifeforms are aware of the principles governing their habitat only to the extent that they impact allostasis. Any rudimentary sense of values would be initially qualitative, not quantitative - corresponding to the ongoing interoception of valence and arousal in the organism. But as Barrett suggests, the neuronal structure conceptualises in order to summarise for efficiency, separating statistical similarities from sensory differences to eliminate redundancy. Our entire evolutionary development has been in relation to the organism’s capacity to more efficiently construct and refine conceptual systems and structures for allostasis from a network of interoceptive systems. The systems and network we’ve developed now consist of whole brain processes, degeneracy, feedback loops and a complex arrangement of checks and balances, budgeting the organism’s ongoing allocation of attention and effort.Possibility
    Again, interoception, when it expresses an objective relation between the subject and their environment, is simply perception. How do you distinguish this interoceptive awareness from being cognizant of the objective features of your surroundings? The fact is that we are able to perceive objects easily and to discern visual frame constituents quickly. There is specialization in the development of our brain structures, and it is very important for drawing empirical information from our environment. Which suggests to me that empirical assessment is natural to us and part of our intellectual function.

    I think this sense that a chair is still a chair to us relates to goal-oriented concepts. Barrett references the work of cognitive scientist Lawrence W. Barsalou, and demonstrates that we are pre-programmed to develop goal-oriented concepts effortlessly: to categorise seemingly unconnected instances - such as a fly swatter, a beekeeper’s suit, a house, a car, a large trash can, a vacation in Antarctica, a calm demeanour and a university degree in entomology - under purely mental concepts such as ‘things that protect you from stinging insects’. “Concepts are not static but remarkably malleable and context-dependent, because your goals can change to fit the situation.” So if an object meets that goal for you, then it’s a chair, whether it’s made of wood or plastic, shaped like a box or a wave, etc.Possibility
    I do agree that if we grouped only according to innate functions, every object that provides static mechanical connection between an underlying surface and a rested weight would be a chair. That would put a trash bin in the same category, and it isn't in it. However, we do have a function concept of the mechanical connection, i.e. the concept of resting weight through an intermediary solid, and it has not changed significantly with the discovery of QM. We develop both function concepts and use concepts, intentionally, depending on our needs. The metrics through which we cluster the space of our experience can be driven by uses or functions, depending on our motivation for conceptualization.

    Charles Peirce’s pragmaticist theory of fallibilism, as described in Wikipedia’s article on empiricism: “The rationality of the scientific method does not depend on the certainty of its conclusions, but on its self-corrective character: by continued application of the method science can detect and correct its own mistakes, and thus eventually lead to the discovery of truth”. The historical oppression of pragmatic truth by empirical truth translates to a fear of uncertainty - of being left without solid ground to stand on.Possibility
    Going back to the influence of QM and the convergence of physical concepts. Aristotle taught that movement depends on the presence of forces. Newton dismantled that notion. But we are still perceiving the world as mostly Aristotelian. I am aware of Newtonian physics and I do conceptualize the world as at least Newtonian. But I consider the Newtonian world as mostly Aristotelian in my average experience. New physical paradigms do not uproot entirely how we evaluate the features of our environment, but refine them. They revolutionize our perception of the extent of the physical law, which makes us reevaluate our physical theories and makes us more observant. The same is true for relativity and QM.

    Yes, pragmatic truth is less precise in a static sense, but surely we are past the point of insisting on static empirical statements? Quantum mechanics didn’t just change our perception of atoms, but our sense that there is a static concreteness underlying reality. We are forced to concede a continual state of flux, which our sensory limitations as human observers require us to statistically summarise and separate from its qualitative variability, in order to relate it to our (now obviously limited sense of) empirical truth. Yet pragmatically, the qualitative variability of quantum particles is regularly applied as a prediction of attention and effort with unprecedented precision and accuracy.Possibility
    I am not sure which aspect of staticity you oppose. Truth does not apply to anthropological realities in the same sense by default. As I stated in another thread, you cannot always support truth with evidence, because not all statements have this character. Anthropological phenomena, including science, depend on the "rightness of approach", which is settled by consensus rather than just hard evidence. On the other hand, empirical truth underlies the aim of the scientific pursuit, and it is the quality of its attainment that can produce convergence. It may not be attained in reality, but if it is attained, the result will be gradually converging.

    Let's suppose that truth, as we can cognitively process it, is never static. Let's examine a few reasons why that could be.
    1. Is it that the world is essentially indescribable? That there is no symbolic representation that could ever encompass the manner of operation and state of nature, even through approximate probabilistic homomorphism. This is equivalent to the conjecture that a machine making approximate probable assessments of the evolution of its locale is impossible to construct even in principle (a toy sketch of such a machine appears after this list). Really? I would not make such a conjecture, but it may be true. I am not omniscient. If such a machine existed, however, then I see no reason why homo sapiens wouldn't be endowed through natural selection with similar cognitive functions, even if they are working towards social and ecological goals as well.
    2. Or maybe we can't perceive the world exhaustively and objectively, because of our sensory limitations? Would the character of those limitations be impossible to compensate for with instrumented observations?
    3. Or maybe we lack the analytic and logical capacity? But we don't need to process information on the fly if we are committed to analyzing facts retrospectively and drawing conclusions.
    4. Or maybe we are constrained by non-analytic dispositions? I am not sure that emotions oppose logical reasoning. Do they? Even if emotion takes part in some open or closed loop in our mental process, does that immediately imply that it detracts from the objective empirical value of our conclusions? In fact, it might motivate them.
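
    Regarding point 1, here is a toy sketch (my own illustration, under the simplifying assumption of a two-state world with known noisy observations) of a machine that maintains approximate probable assessments of the evolution of its locale - a discrete Bayes filter:

    ```python
    # Toy Bayes filter: maintains a probabilistic assessment of a
    # two-state world ("rain"/"dry") from noisy observations.
    transition = {  # P(next state | current state)
        "rain": {"rain": 0.7, "dry": 0.3},
        "dry":  {"rain": 0.2, "dry": 0.8},
    }
    likelihood = {  # P(observation | state); sensor is right 80% of the time
        "rain": {"wet_ground": 0.8, "dry_ground": 0.2},
        "dry":  {"wet_ground": 0.2, "dry_ground": 0.8},
    }

    def step(belief, observation):
        # Predict: push the belief through the dynamics of the locale.
        predicted = {
            s: sum(belief[p] * transition[p][s] for p in belief) for s in belief
        }
        # Update: weight each state by how well it explains the observation.
        unnorm = {s: predicted[s] * likelihood[s][observation] for s in predicted}
        total = sum(unnorm.values())
        return {s: v / total for s, v in unnorm.items()}

    belief = {"rain": 0.5, "dry": 0.5}
    for obs in ["wet_ground", "wet_ground", "dry_ground"]:
        belief = step(belief, obs)
        print(obs, {s: round(p, 3) for s, p in belief.items()})
    ```

    However crude, nothing in its construction seems impossible in principle; the open question is scaling it to the intricate detail of the actual universe.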

    I do not dispute interoception. Appreciation of music and art is, I believe, an interoceptive-analytical loop of sorts. Most mental actions involve a degree of satisfaction that manifests also interoceptively. I only dispute the claim that it sways our cognitive response from objective analysis of the information toward some allostatically aimed impulsive reaction. For social interactions, as they are inherently subjective, this may be true, but for empiricism and physical feature analysis, I would say not so much.