• Philosophical Progress & Other Metaphilosophical Issues
    Art, philosophy, literature, or music are much less about technology and more about an individual reflecting on the realities of his times.Bitter Crank

    So they would be technologies of the self, technologies of social advance, surely? The humanities are supposed to point in the direction of desirable cultural change.

    Perhaps the question mark might be about the “progress” aspect. But there is evidence like Pinker’s claims about the diminishing levels of human violence. And individual freedom has increased in many ways.

    Of course, McJobs, obesity, consumerism, inequality, etc, etc. But those haven’t been actively promoted outcomes when it comes to the humanities.

    So I don’t see a problem with finding a historic progressive trend in philosophy as the technology of the self/the technology of social being. And the arts are about inventing ways to be new.

    If the argument is that they are not getting better, I wouldn’t be so sure. I’m impressed by whole new areas of expression like video art, graphic novels, graffiti art and challenging TV dramas.
  • Thoughts on Epistemology
    I don't see it as up to you to deliberately determine what is significant for you. That task has already been accomplished the moment you experience any event. You find yourself attending to something before you consciously will it.Joshs

    Yes. The "you" I have in mind is not some conscious being. This is about the cognitive architecture of brains. So what catches our attention is what fails to be dealt with at a habitual level of processing. A lifetime of experience serves as enough of a filter so that most of every moment can be left to automatic pilot. Attentional resources are reserved for the differences that make a difference at a habitual level of response - the ones that can't be assimilated to the similarity that is ... a habit.

    And what is relevant to you is that entity that is not so similar in relation to your construing history that it will not be noticed, and not so other that you will fail to assimilate it. The 'too other' is what is experienced via affects of fear, anger, etc. that paralyze our ability to go on.Joshs

    Yep. Things could be too outside of normal experience to be assimilable even by attention-level processing. But even then, eventually we reach some kind of adjustment. We "conquer our fears" by forming some construct to which a class of events can be assimilated.

    So the general principle applies. The mind doesn't build constructs by focusing on what things have in common. Concepts are constraints on variety. They are about learning the differences that can be ignored, so as then to highlight the differences that are key.

    Concepts are filters rather than collectors. They separate signal from noise.

    Maybe this is the difference between hoarders and minimalists? One can't bear to throw away anything - it all matters. The other is selective and finds order in becoming disinterested in inessential variety. :)

    Bateson shared some things with Kelly, but I prefer Kelly's phenomenological stance to Bateson's behaviouristic model of causation.Joshs

    How do you mean Bateson's behaviourism? I would have thought his informational/hierarchical feedback approach was pretty anti-Behaviourism.

    Behaviourism treated nerve networks as chains of physically triggered nodes. Sensory energy gives a network of poised physical connections a jolt, then that shot of energy just rattles around the circuit in mechanical fashion.

    But cybernetics stressed the informational aspect of neural action. And seeing networks as hierarchies again completely changes the paradigm. Now a jolt of sensory energy can only disturb the state of the system to the degree the system lets it. Through top-down constraint, it can damp the "input" just as readily as it amplifies it.

    Do you just mean that Bateson was overly physicalist by comparison to Kelly? That could be so. I only mention Bateson because he coined some good phrases, not because I find him totally reliable on all matters psychological. Clearly, as with schizophrenia, he could really screw up.

    And I've only faint familiarity with Kelly. But checking Wiki, I see his approach is exactly the dichotomy-based approach to categories that I take....

    Kelly defined constructs as bipolar categories—the way two things are alike and different from a third—that people employ to understand the world. Examples of such constructs are "attractive," "intelligent," "kind." A construct always implies contrast. So when an individual categorizes others as attractive, or intelligent, or kind, an opposite polarity is implied. This means that such a person may also evaluate the others in terms of the constructs "ugly," "stupid," or "cruel."

    So you find a person cruel to the degree that the person is not kind, and vice versa. That is, an individual or particular is located on the spectrum of possibility created by a complementary pair of generalities. This is that to the degree it is not the other.

    So a construct would be the spectrum that allows the binary judgement - an assimilation of a particular instance to one or other generality. But still, generalities are constraints, in my book. They assimilate the particular by ignoring irrelevant differences rather than collecting together the sufficiently similar.

    Is a three-legged rabbit still a rabbit? Why not, if every rabbit has got some kind of difference and the loss of a leg doesn't make any essential difference? But maybe a race of three-legged rabbits exists. They are known as ribbits. Now it matters if our candidate was born one way or the other.

    The point about a constraints-based approach is that it demands the least work. To pick out the sufficiently similar is a lot of work. Each individual has to be inspected according to some checklist. They will always be different and so it is going to be a judgement whether the difference matters anyway.

    But if instead it is presumed that everything is the same until something critically different manifests, then that makes for efficient processing. And a dichotomous or bipolar construct spells out what "critically different" means at the level of absolute generality. It is the exact opposite of whatever pragmatically defines "sufficiently alike".

    Every beautiful person is a little bit ugly. But rather than fuss about the classification problem that appears to cause, we just take a broad-brush approach of accepting every person as beautiful until - in binary fashion - a person seems to fit better the folk who are in the class of "every ugly person is a little bit beautiful".

    We can of course add intermediate categories - the people who are just middling. But a constraints-based principle is still the low-effort approach. It doesn't demand every detail be judged for similarity. Only some general weight of "poor fit" has to be judged. Then the categorisation can flip over to its other pole.
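    The low-effort, constraints-based categorisation described above can be put as a toy sketch (all names and thresholds here are hypothetical illustrations, not a psychological model): everything is assimilated to the default pole until the accumulated weight of "poor fit" crosses a threshold, at which point the judgement flips to the opposite pole. No checklist of similarities is ever consulted.

```python
def categorise(features, default="kind", opposite="cruel", threshold=3):
    """Bipolar-construct sketch: assimilate to the default pole,
    counting only the differences that make a difference.
    `features` is a list of observed acts; anything not flagged
    as a critical difference is simply ignored."""
    critical = {"mocks others", "hits others", "steals"}  # hypothetical markers
    poor_fit = sum(1 for f in features if f in critical)
    # Only the general weight of poor fit is judged; when it is
    # heavy enough, the categorisation flips to the other pole.
    return opposite if poor_fit >= threshold else default

print(categorise(["smiles", "shares lunch", "mocks others"]))
```

    The design point is that the work is all in the negative: the construct names what must NOT be ignored, and everything else is absorbed into the default without inspection.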
  • Thoughts on Epistemology
    Or a difference that makes a difference.

    Truth by tautological similarity has all the familiar problems. But if similarity is defined in terms of the set of differences that an observer feels don't really matter, then you have the basis of a useful construct. Now the differences that make a difference pop right out.

    That is the Batesonian paradigm that has more traction in psychology these days I would say.

    The world is rife with differences. Everything is different or individuated in some fashion. So the art of cognition is learning how much difference you can afford to ignore. That way, only the significant differences reach your attention.

    That is the kind of "Helmholtzian" cognitive architecture that an anticipatory neural network or Bayesian brain seeks to implement.
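    That anticipatory architecture can be caricatured in a few lines (a deliberately crude sketch, not a neural model; the tolerance and update rate are invented for illustration): the system predicts its input, quietly absorbs small prediction errors into habit, and escalates only the errors too large to assimilate.

```python
def attend(inputs, prediction, tolerance=1.0):
    """Helmholtzian caricature: compare each input against a running
    prediction and surface only the differences that make a
    difference (errors beyond the habitual tolerance)."""
    surprises = []
    for x in inputs:
        error = x - prediction
        if abs(error) > tolerance:
            surprises.append((x, error))   # escalate to attention
        else:
            prediction += 0.1 * error      # quietly absorb into habit
    return surprises

print(attend([5.0, 5.2, 9.0, 4.9], prediction=5.0))
```

    Most of the stream is handled on automatic pilot; attentional resources are reserved for the one input the habit level could not deal with.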
  • Thoughts on Epistemology
    When one says beliefs are concepts, it sounds like some explanation has been offered.Banno

    The dispositional view of Pragmatism emphasises the way that an adequate concept has to bring with it an adequate measurement. So a belief has this dichotomous structure. The idea is separate from its confirmation. However, the idea itself also tells us what kind of confirmation is suitable.

    So truth-telling can't transcend its own grounding conceptions. Yet also, the business of truth-telling can improve over time as it becomes measurably less subjective by being measurably more generic and public.

    From a nice review of that Misak book...

    This might seem rather surprising, given that Ramsey is usually associated with a redundancy or proto-deflationary theory of truth. But Misak argues that, after about 1926, Ramsey saw that an adequate account of truth needs to do more than note the equivalence of “p” and “‘p’ is true”: if one has a disposition toward a dispositional account of belief, as Ramsey did, then it’s natural to ask what sorts of dispositions, in general, go along with believing that p is true.

    Ramsey came to much the same conclusion as Peirce: the belief that p commits one to giving reasons for p and considering the evidence for and against it. Thus, on Misak’s reading of Ramsey, “if we unpack the commitments we incur when we assert or believe, we find that we have imported the notions of fact (vaguely conceived), experimentation, and standards for good belief” (230).

    Pragmatic approaches to meaning and truth thus offer a tidy, mutually-reinforcing package that is an attractive alternative to the more typical combination of a representational theory of meaning with a correspondence theory of truth—while also offering a meaningful extension beyond the truism at the heart of deflationism.

    [Review: Cambridge Pragmatism: From Peirce and James to Ramsey and Wittgenstein, by Cheryl Misak - John Capps]
  • Do numbers exist?
    This is a deep mystery. Our abstractions are telling us something about the world. We're not sure what.

    I don't think you and I disagree all that much.
    fishfry

    Great. I respect that you are strong on the mathematics. So I was hoping for a more productive discussion.

    Maths is unreasonably effective. Its abstractions are more than mere intellectual accidents. There must be a reason for their Platonic-seeming necessity. That is why the nature of mathematical truth remains so central to physicalist inquiry.

    If we are not sure, we still ought to be exploring with an open mind.

    My understanding is that we can accommodate abstract mental constructs quite easily within physicalism. Abstractions are thoughts, biochemical processes in my brain.

    But thoughts are still different from rocks. Thoughts and rocks are both physical processes, but they have a different character. One doesn't need dualism.
    fishfry

    Neuroscience believes thoughts to be informational processes, not biochemical ones. To use the easily abused computational analogy, the "material physics" explains nothing. You could implement the logic of a Turing machine in some system of tin cans and bits of twine.

    So a science of the mind definitely does need a dualist physicalism of some kind. There has to be some ontic difference between information and entropy, even if they also arise in some common (mutual) fashion.

    But putting that aside, the issue here is the epistemic one of a distinction between observers and observables. Classical physics just presumes that observers are free agents, able to make measurements of reality without disturbing that reality. And this supports the idea that thoughts and rocks are unproblematically separate. Not only are our conceptions of reality a free invention of the human mind, but our perceptions of reality enjoy a matching freedom from our inventions.

    That is, we invent the physics of rock motion. Then rocks have a motion which we can - without getting entangled and changing anything - concretely measure. There is no epistemic concern about the line between what is our ideas and what is reality.

    However we now know better. A clean break between observers and observables looks to have become fundamentally impossible.

    This epistemic shock doesn't seem to have registered with the mathematical community as far as I can see. The ontological options are still either that maths is a free invention or a perception of Platonic reality. Maths doesn't have to prove itself in the court of the real world, only in the court of logical opinion. It has to conform to the rules of an informational process - the syntax that is grounded in set theory, or category theory, or whatever other fundamental notion of a closed syntactical system happens to be in vogue at the time.

    Our brains go quite comfortably back and forth between the real and the unreal. Yet sane people always know the difference.fishfry

    There is nothing so comfortable as a useful habit. Sanity is not having to think, it appears.

    But that is simply advising people to give up on physical inquiry. Quantum mechanics is true but seems insane. So don't think about it.

    There's some mathematical constant pi "out there."fishfry

    Yes, it is out there as a ratio capturing a primal relation of a physical world with some kind of limit-state perfect symmetry. Let that world be not perfectly flat, let it be non-Euclidean, and the value of pi starts to wander accordingly.

    Between the hyperbolic and the hyperspheric, there is only one geometry that is absolutely balanced enough that the value of pi stays stable as far as the eye can see. Whether your circles are big or small, pi remains always the same.

    Whoops. Are we talking about the reality of relations here? How physically abstract. Whoops. Are we talking about the presumed scale-invariance of observables? How mentally abstract.

    So pi pops out of reality, out of nature, not by accident but because the very possibility of a "physical relation" has some emergent invariant limit. It arises out of the broken symmetry that is a perfect orthogonality. :)

    Thus on the one hand, pi - as a position on the number line - looks the purest accident. Why should it have that exact value? On the other, pi is the identity relation when it comes to a limit notion of orthogonal dimensionality. We might as well just give its value as 1. Everything else that is less perfectly broken can be measured as some difference to that.
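    The claim that pi "starts to wander" away from flatness can be checked numerically. Assuming the standard formula for the circumference of a geodesic circle of radius r on a sphere of radius R, C = 2πR·sin(r/R), the circumference-to-diameter ratio equals pi only in the small-circle (flat) limit and shrinks as circles grow:

```python
import math

def ratio_on_sphere(r, R=1.0):
    """Circumference/diameter for a geodesic circle of radius r drawn
    on a sphere of radius R. On a flat plane this ratio would be
    exactly pi for every r; on the sphere it depends on scale."""
    circumference = 2 * math.pi * R * math.sin(r / R)
    diameter = 2 * r
    return circumference / diameter

# Tiny circles recover pi; larger circles see the ratio wander downward.
for r in (0.001, 0.5, 1.0):
    print(r, ratio_on_sphere(r))
```

    Only in the zero-curvature limit is the ratio scale-invariant, which is the point about pi marking a balanced limit-state between the hyperbolic and the hyperspheric.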
  • Thoughts on Epistemology
    The pain in my foot is not the same as the pain in my throat. The sensation in my back - is it a pain, or just a twinge?

    These "states-of-mind" share little more than that we use much the same words for them.
    Banno

    Yup. Either that or the same ontic categorisation of reality into self and world.

    My foot, my throat, my back ... and my pain.

    Things only get confused when I see you slice open your foot and feel something of your pain.
  • Do numbers exist?
    I said no such thing. Circles and numbers are abstractions. Limits have a technical definition and I would never use that word imprecisely in a mathematical discussion. This is not the first time you've quoted me as saying something I never said.fishfry

    I offered a statement to see how much you might agree with it. The clue was in the question-mark. So when it comes to formal precision, grammatical conventions appear above your paygrade.

    Surely I don't have to explain to you the difference between abstract and physical objects. You're just being disingenuous.fishfry

    So perhaps you can explain the difference. You might discover that it is not as secure as you want to pretend.

    Mathematical forms are real, they're just not physical.fishfry

    Yep. They're mental. Or something.

    Oh lordy.

    But of course electrons are right on the border between the physical and the abstract. I do understand your point that saying that physical things are "really there" is a stretch once we get into the higher realms of physics. Still, one can distinguish between a number and a rock, one being abstract and the other physical. Even you would agree to this distinction, yes?fishfry

    Right. So you accept that when we really get down to brass tacks - fundamental particles - suddenly all this idea vs reality ontology feels insecure. We are right on the border - of a different metaphysics.

    But hey, let's get back to the safety of classical atomist ontology. Let's go back to the world as we originally chose to imagine it.

    Sounds legit. No one could get confused about things at the level of everyday commonsense, could they?

    Oh lordy.

    Surely you can understand that my response was to someone claiming that the number pi proves that numbers are physical or have material existence. I'm not on any soapbox about the ontology of physics. I understand the traps therein.fishfry

    Hmm. But you "prove" that by claiming the reality of material being. And your view of material being is dependent on the fictions of classical physics - the world of substantial objects.

    So you are on a soapbox for sure. You are waving the banner for a particular notion of physicalism. And yet you agree also that this particular notion fails when you get down to brass tacks.

    A tad "disingenuous", no?

    That your point?

    Ok. I don't disagree.

    But the number pi is a lot different from a rock.
    fishfry

    Again, the point is that a ratio like pi and an object like a rock can be treated as if one is a human invention, a mere accidental notion, while the other is indubitably real in being physical and material. But that is just an ontology endorsing a sharply divided dualism.

    It is a highly subjective point of view in that you are happy to assign some objects to "the mind", other objects to "the world". And even the slightest questioning of this paradigm sends you into hyperventilating panic. It constitutes a personal assault.

    So I am concerned with better approaches to metaphysics. And the proper relation of the forms of mathematics to the materials of physics is central to that inquiry.

    Indeed, it has been ever since Ancient Greece.

    Wait!! It seems you agree with me after all.fishfry

    A little desperate there?
  • Thoughts on Epistemology
    Further, a statement is not a thing-in-the-head. The private object is avoided.Banno

    But isn't it a thing-in-the-heart according to you? Don't pretend to doubt what you believe in your heart, etc.

    The belief is made up of two distinct things, the private nature of the mind state, coupled with the public acts. The belief is not private, without the public part, we would not know that there was a belief. So don't separate the two.Sam26

    The issue here, as I see it, is that the mind side of the equation is the personal. The dichotomy of private vs public presumes a basic dualism - a strict divide between mind and world. But a dispositional and semiotic approach to knowledge would stress that even "access to the private self" is in fact a development of a personal stance. We form "ourselves" in a meta-representational sense by the very act of inquiring "what is going on inside me?".

    So the mental is just as much part of the construct as the noumenal. The beetle in a box metaphor is seductive but badly wrong. Well, at least it rides roughshod over the fact that introspective self-awareness is a culturally-taught and linguistically-structured skill. The interior nature of consciousness - the idea that it is another "world" - is rather an illusion on this score. Its truths seem secure, but they too depend on habits of interpretance.

    The self becomes what we have to produce to make the world real. And then saying there is a beetle in the box, a state of mind that "the self" privately perceives, is a recursive linguistic act. It is using the semiotic technology of language to isolate the "true self" from the public self.

    We are public creatures first in being social creatures. And then within the reality of that public language game, we are meant to discover our psychologically individuated "real selves". The possibility of private truths at odds with the public truths becomes a live issue as we develop a "modern Western rational attitude" to the nature of "our" phenomenal existence.

    So yes. There seem to be inward feelings and outward actions. And each appears to speak to its "other" in some crucial relational fashion.

    But you have to be able to credit a society or culture with a "mind", a dispositional attitude embodied in its language games, to see what gives our public acts their semantic truth or facticity. Our actions can be judged.

    And in counter fashion, we have to take the personal "mind" a whole lot less seriously. It is not some private reservoir of feelings or facts - the objects of a self-perception that then begs the question of, well, who now is this observing self?

    The private~public dichotomy implies a hard division - a metaphysical-strength one - between the psychological self and the social self. But really, selfhood is always emergent - the bit that has to be produced to make its counterpart of "a world" real.

    So language anchors social selfhood. And it anchors personal selfhood. On both levels, it is producing its beetles in their boxes.

    We can see how this does then create a distinction between public truths and personal truths. As Peirce argued, our best truth is the communal one - ie: the beliefs that a community of rational inquirers will arrive at as being the least doubtable in the end. But there is also the possibility of our local personal truths. To the degree that we might fruitfully be possessed by some individual goal or disposition, then we get to see the world "in our way".

    Well, something has to explain artists, poets and other entrepreneurs. There is a reason why now - as a society - a personal vision has become something to encourage. :)
  • Neither Conceptual Nor Empirical
    Words are not just labels; that is implicit in their not being just signs, and a large part of why the Peirce treatment falls short.Banno

    Where does Peirce say this? You are thinking of Saussure again.

    Ain't the sociology of philosophy amusing. AP has to demonise pragmatism/semiotics to secure its prestige. It can't afford for folk to realise that it is simply repeating what has already been worked out.

    If you don't read Peirce, then somehow you can't be blamed for not knowing better. You can think that a dispositional theory of truth leads automatically to metaphysical quietism - philosophy's Behaviourist phase! :D

    What counts is the interaction with the world - "And when you make the right sound, the food arrives" - that's the way words work, not as labels.Banno

    Uh huh. The dispositional theory of truth. The way Peirce fixed Kant's cognitive representationalism. The theme Ramsey might have really made something of. The theme that Wittgenstein then ran off the other side of the road to great acclaim.

    Nothing like a pendulum that swings from its one extreme to its other, eh? "We couldn't get logical atomism to work, so now we will believe its exact opposite."

    I'm really boggled by the proposition that values are nonconceptual and nonempirical. I'm wondering if Agustino is using some Humean version of empiricism. Because surely values are not something that is "seen" like we might see a chair, but I'm not sure the rest of this follows.Marty

    It's more subtle than that. Values condition our conceptions and perceptions. They are the purposes or dispositions that give shape to inquiry. So how we think of the world, and what we accept as its facts - ie: the truths we can measure - are informed by what we hope to get out of that way of looking at it.

    These values are at first implicit. They are the ground on which we stand to make a start. Then we turn around and see that they are what we had to inject into the process of inquiry to get it going. We "perceive" our values like we see a chair in forming a meta-belief about the "us" that is the self at the centre of a process of inquiry.

    So the OP was striving after a triadic relational view. The stool needs three legs to sit steady. But the relation has to be understood in terms of a developing or evolving process, not one that starts from any definite existence.

    The total sign relation has its three parts. There is the "self" that emerges - some habit of interpretation that is "us with our evolving dispositions or collection of values and purposes". Then there is the world - the good old thing-in-itself. And mediating the relation are the signs we form of the noumenal - our phenomenal experience.

    So buried in there, you have the essential Kantian insight. The mind has to get started by making some abductive guess. But the Peircean approach recognises that purposes or goals are intrinsic to this getting started. The conceptual a-prioris are much deeper than some merely physical intuitions.

    And thus it is the self itself that is being developed in the forming of a sign relation with the world. It is not about a mind that already exists making sense of a world that is some unknowable state of affairs. Both self and world emerge from the more basic thing which is the attempt to relate in a fruitful or pragmatic fashion.

    I'm merely commenting on the notion that if we're defining empiricism in an old fashion sense then no such values appear to us in daily observation such that they are provided by external content. Values become a projection of our own mental capacities if we view the external world as being mere physical extended images. But such a view is untenable.Marty

    So it looks like we agree. The difference may be that the Peircean approach is grounded in phenomenology and then sees "the self", "the mind", as part of what emerges via a semiotic relation. Nothing exists in some brute fashion. Truth is intimately tied to the "self that has a reason to be asking that form of question". There is no truth beyond that. Truth-aptness depends on a self coming into being with its reasons. The "world" only exists as the empirical observations that would make these truths true.

    Thus it is all internalism. Almost idealism. Yet it is based on the ontic commitment of there being something "out there" worth modelling. It doesn't disbelieve reality. It just doesn't think that knowledge of reality can transcend the selfhood that has to be developed for there even to be "a view of reality".

    Kant's cognitive representationalism showed that "the mind" could not know reality directly. Peirce's dispositional relationism shows that even the mind is part of the construction. An image of the world wouldn't be possible unless a purposeful self, laden with values, was something that could develop due to the existence of a sign relation.
  • Do numbers exist?
    Yes but there is no such thing as a circle in the world. The circle whose circumference divided by its diameter is exactly pi is not any object that can exist in this mortal world of ours.fishfry

    So circles and numbers are the idealised limit of physical reality? They represent perfect symmetry, and to "physically exist" means always to be individuated - a "materially" broken symmetry. Therefore mathematical forms are not real. There is only imperfect matter and its approximations of these forms - always inevitably marred by "accidents". Every physical circle is a bit bent. Any collection of things may be given a number, but no two things are actually alike.

    This is certainly a familiar ontological view. But it should be troubling that physicists are having such a hard time finding the "real matter" that is limited by these "unreal mathematical forms". Talk of this "mortal world of ours" is to accept a fundamental materiality to being which is proving only to be another idealisation.

    To make your position secure, you need "matter" to be something that physicists can actually put their hands upon and show to be real. As it stands, that is not the case. Instead - as argued by ontic structural realism, for instance - the formal aspect of nature seems the more real when it comes to the question of why fundamental particles exist.

    Materialism is in metaphysical crisis. So the old Aristotelian story on substance - the one that folk trot out to oppose Platonism - no longer works.

    The story is better flipped on its head. Limits are what produce individuated materiality. And without limits, you would just have "a world of pure accidents". A vagueness that is no particular kind of thing at all.

    So good old solid matter - when stripped of bounding form - becomes just a realm of "perfect fluctuation". Instead of being individuated and having efficient cause, it becomes a state of completely inefficient cause. :)

    Anyway, the point is that if mathematicians don't believe form to be real, well physicists are struggling to find matter to be real. And the best way out of that bind is to look to causality and treat that as the best definition of "physical reality". From there, we can see how limits and accidents make a nice complementary pairing. Limits reduce accidents. But accidents prevent limits being reached.

    Reality becomes a pattern produced by the suppression of fluctuations - a constraint on freedoms.

    Are numbers real? Well it is certainly true that our models of reality are social constructions. Epistemically, they are only "a useful idea". That is acknowledged in agreeing that we are modelling.

    However, when it comes to our ontic commitments as they arise from enquiry into nature, we begin to appreciate that the materiality and individuation of the world is something we have too readily taken for granted. It just seems perceptually obvious that we exist in a world of solid objects - chock-full of their own histories of material accidents. A substance ontology is what we experience, and any mathematical notions about form seem so clearly an abstraction produced by the creative human mind.

    But again, physics no longer supports this perceptual belief. It went looking for the real solid stuff that is matter and didn't find it. All it could find was fluctuations bounded by symmetries.

    Maybe it is time to believe the physics. :-O
  • Origins of the English
    Ah, no. I was talking about its prestigious cultural position, not its influence on English vocabulary.
  • Conscious decision is impossible
    To be in one's working memory, means that the person is consciously aware of that thing.Metaphysician Undercover

    Working memory is one step back from the attentional spotlight (granting that all these distinctions are somewhat crude and computational).

    So you can only have a definite working memory having been consciously attentive to something. But having it in working memory doesn't have to mean you are currently attending to it. It is only close at hand and being held as a distinct "snapshot".

    So if the person is able to hold six items in one's working memory, this means that the person is consciously aware of all six of those items at the same time.Metaphysician Undercover

    There is also iconic memory - https://en.wikipedia.org/wiki/Iconic_memory

    This shows how we can hold "a whole scene" in mind as an unprocessed sensory pattern before selective attention gets to work on it.

    So while I find the cog-sci approach clunky, the various component processes it identifies are based on solid experimental distinctions.

    If you want to talk about working memory being "conscious", that boils down to its contents being easily recallable, highly discriminated, and so generally reportable.

    The whole concept of "being consciously aware" is problematic as it imports an unwanted degree of binary definiteness into what is going on. It leaves us with little else except the claim neural activity is either conscious or unconscious. It is implicitly dualistic.

    Yet even so, it makes more sense to talk of working memory as being what we have just consciously attended and could easily bring back into attention. It is not the bit of the world - some particular viewpoint - that is our currently experienced one.

    Although as also said, attention itself can range from tightly focused to a very defocused and vague state. We can gaze off and not be thinking anything in particular. We can even switch to a deliberate vigilant state where we have cleared the decks to allow the unexpected to break through.

    So attention itself can be decomposed in a variety of ways that can be explained in terms of neurological structures or paths.
  • Origins of the English
    That channel - https://youtu.be/_iVdy0s8ARE - has good stuff. It shows how detailed the genetics is getting and how it can clarify the archaeology.

    So what is there left to debate? The interesting point could be the degree to which the mongrel English language may hold a cultural advantage in being in fact "ethnically cleansed".

    There is much rightful angst about the loss of indigenous languages as those languages are the living embodiment of a culture. A whole way of life is encoded in a shared language game. So to rob a people of their language is erasing their cultural identity.

    But by the same token, the loss of cultural specificity would be an advantage in becoming "modern". English is arguably the best language for developing new cultural and intellectual games because it carries less history. It has less concern for its ethnic purity - as opposed to French, for instance.

    Some say German is in fact a better language for thinking really complicated thoughts. And English could also be said to carry an awful lot of cultural baggage in its rich variety of primary sources. Claiming English to be the best vehicle for modern thought is also - I agree - a stretch. We could examine the merits of Esperanto. :)

    Anyway, there is a lot of interesting and new stuff here it seems to me in being able to use precise genetics to sharpen the questions we could have of social history.

    Those videos made me wonder why there is so little Roman blood in the British gene pool, and yet one was forced to learn Latin as a kid ... as it improved one's grasp of English, apparently. Or even Greek, as the Romans themselves needed that for access to their cultural heritage, and a real Englishman ought to recapitulate that.

    Amusing really. The Poms maintained their own class divisions by learning how not to speak their native language. And even the languages of their dominant neighbours - frenemies like France and Germany - were pretty optional. What really defined the dominant class were the languages of their intellectual ancestors.
  • Conscious decision is impossible
    That’s more a measure of how many items we can hold at once in working memory. Each item needs to be processed serially or individually. That is why tests present you with a succession of items to be remembered.

    In computational terms, you are talking about the mental scratchpad used as temporary storage for what you want to keep close at hand. Attention is needed to fetch items back into close focus.

    You’ve mixed up that story with the other one which tests perceptual grouping. At a glance, we can see that there are one, two, three or then “many” of some object in a collection. If the objects are arranged - as a square, as a hexagon - we can then see the wholeness of the pattern and the number we associate with it. With a random arrangement, we would have to go back to some form of serial inspection.

    The take home is that cognition is hierarchical. Attention is at the top of the tree as the narrowest useful view. We only want a single viewpoint defining our state of mind at any time so as to “arrive at a decision” about what we are experiencing.

    So attention has to balance the conceptual possibilities in terms of lumping or splitting. It is a dynamical choice itself, not some fixed bandwidth spotlight. It can see the whole just as much as it can see the parts. Its job is to find the particular perceptual balance at any given moment.
  • Philosophical Progress & Other Metaphilosophical Issues
    For me, progress would be best defined as moving away from subjectivity and towards objectivity. So the destination is the most general or abstract view of existence.

    But that viewpoint also has to be concretely historic. If existence is a product of evolution or development, then that makes the “truths” of cosmology and human mental evolution a core concern.

    So it is then no surprise both that philosophy has made constant progress as a culturally evolving endeavour, and that it is focused broadly on this question of disentangling the subjective and objective poles of being.

    The everyday difficulty is that people tend to then split into opposed camps, failing to see that subjectivity and objectivity are complementary directions of intellectual progress.

    So contradicting what I first seemed to say - objectivity is the goal - intellectual progress also includes a contribution to sharpened notions of “being a self”. The contrast of aiming for objectivity brings with it a balancing cultural focus on the issues of personal individuation.

    We see this from the Socratic invention of self-actualisation and Ancient Greek theories about democracy.

    And - of course I would say this :) - pragmatism is the philosophy which offers the right kind of balance between the complementary extremes of subjectivity and objectivity. Peirce fixed the solipsistic cognitivism of Kant in particular by starting in phenomenology and deriving an idealist objectivity.

    AP and PoMo represent philosophical failures insofar as each tends too far towards one or other pole in unbalanced fashion. Sticking closer to a scientific and historical path finds philosophy achieving its most actual progress.

    (But failure has value too. We need to know what doesn’t really work.)
  • Conscious decision is impossible
    I don't see what is the problem.bahman

    I get the impression you believe nature is Newtonian deterministic and therefore free will becomes a problem. But that is a limited view of causality even within physics these days, let alone neuroscience.

    I am talking of a view of brain function where it accumulates many degrees of freedom - all the many things it might concretely do (and so also, not do). And then attention acts top down to constrain or bound these freedoms in useful, goal achieving, fashion.

    So free will is just rational choice, voluntary action. There is a vast variety of things we could be thinking or doing at any instant. We accumulate a vast store of habits and ideas - concrete skills and notions. Then we must constrain this huge variety of possibilities during every conscious moment so that we limit ourselves to thoughts and actions best adapted to the needs and opportunities of the moment.

    To speak of free will is really just to note that we have a socially constructed sense of self that lies over our voluntary behaviour - another level of filter to bound the possible variety of our behaviour. We can consciously weigh what might best suit us personally against what might best suit some wider communal identity we participate in.

    So a constraints-based causality avoids the philosophical problems that a physical determinism would seem to create.
  • Conscious decision is impossible
    I believe that there is a doer which can initiate or terminate a chain of causality otherwise there is no free will.bahman

    But apparently you also believe you can drive unconsciously, and that consciously you are only aware of a single thing. So how does it all fit together for you if you reject a more scientific view?
  • Conscious decision is impossible
    When we say the river flows, is there something more than the water and the channel carved over time?

    The landscape certainly has developed a habit. We can give a name to the dent in the ground that usually has water draining down it. But do the Volga or the Elbe exist over and above the particular drainage function they have in their settings?

    There is more to the identity of an individual brain, an individual psychology. But the basic point is the same. If we can discover a functional description that seems a true explanation of what we observe, then that is when we should be wary of the reification - the habit of language - which then demands we turn a process into an object, a verb into a noun.

    If you speak of some doing, it is the rules of grammar that insist on the presence of some doer. Yet you just described the doings in a functional way where there is no object, just a process.

    So again, do you believe a habit of language and insist there is some missing doer? Or do you believe the functional description that looks to have included all the causality you could find? A process is just a process. Giving the process a name doesn’t mean there is now the further thing of some object standing behind all the actions of the process.

    “Oh no! The Volga flooded and washed away the village. Why did it decide to do that?”

    “Oh no! Brahman decided to pick the hazelnut whirl rather than the Turkish delight from the box of chocolate All Sorts. Why did he decide to do that?”

    Grammar wants us to think about things a certain way. A functional or process view - the one science is seeking to take - is the attempt not to get sucked in by the usual games of language.
  • Conscious decision is impossible
    You seem to be working with a homuncular notion of awareness. Language demands that we speak of the “I” who is the self behind every mental doing. And so when we are attending and consciously deciding, there is this elusive “we” now apparently an extra part of the picture. We lose sight of the fact that this we-ness is part of the process, part of the construction, part of the action. It describes the fact that the brain was doing something, and that included taking a point of view, and a point of view implies “an observer with a choice”.

    So you seem to accept functional talk. There is what it is like to be behaving habitually or to be behaving attentionally. However you also want to assign a further identity to the doer of any doings. Language demands that there be an efficient cause. And you believe grammar more than you believe psychological functionalism.
  • My doppelganger from a different universe
    Ah, so if they are entangled, we wait until they are disentangled? Eventually there is the one Bob measured and the one Alice measured? Except now we don’t know which Bob and which Alice in which world branch as we have just duplicated them under MWI.

    Sounds legit.
  • Thoughts on Epistemology
    I see you have no plans for this to go anywhere. But anyway, I’ve already spelt out the difference between Bayesian expectation and propositional structure here.
  • Thoughts on Epistemology
    So my cat doesn't have beliefs? Or is her scratching at the door a statement of a belief - just not a linguistic one?
  • Thoughts on Epistemology
    And a disposition would be some preceding metal state.Banno

    And yet a disposition to act "causally in the world" is critically different from one to act "in the realm of truths and facts". So to call them both "mental" - or even metal - would be the matter in question.

    Does one actually serve as bedrock to the other? Or does each have a different ultimate bedrock?
  • Thoughts on Epistemology
    First, I did not say that animals reason, but of course I'm using reason as something that takes place in language.Sam26

    That's a quibble.

    You can define reason as a linguistic act. But animals have been observed to reason in terms of working out how to solve some real-life problem. Even a jumping spider can scan a scene and work out how to creep around behind its prey so as to drop down on it. So broadly speaking, animals can "think things through" in a causally efficacious sense. The normal usage of "reasoning" is broad enough that you will in fact have a problem insisting on your narrower definition. And I was only trying to bring this out in describing your position as accepting "animals can reason in a causal fashion".

    I also did not say anything about causal knowledge, in fact, I said just the opposite. Knowledge is based on certain causal beliefs. I do not even think there is such a thing as causal knowledge.Sam26

    That's another quibble so far as I'm concerned.

    But then I don't believe in "knowledge" as justified true belief. I only believe in knowledge as justified belief. Truth is a rather redundant term for the pragmatist, as uncertainty can never be completely eradicated from any state of belief. (A separate argument perhaps.)

    I do not understand this. I would not say that evolution sorted out epistemic rules, what does that mean? It sounds like you are giving evolution an intellectual basis. Maybe there are certain causal laws that dictate certain outcomes, but rules imply something else for me.Sam26

    I doubt I could put it more plainly.

    Evolution produced nervous systems that were up to the task. They embodied epistemologies that worked.

    You now seem hung up on the word "rules". Clearly I'm using it in a loose sense - one that imagines biology to be implementing some kind of "program" for understanding the world. It should be equally obvious - in that I'm taking an embodied/enactive/ecological stance on animal perception and cognition - that that is only then a metaphorical use of the term "rules".

    In fact, given my whole bleeding point was that rules - syntactic structure - are a product of the informational realm of being, the underlying word-play should be clear. Actual rules are the last thing you will find in the biological organisation of the brain. Or in nature generally.

    So my use of the word "rules" ought to have a usefully ironic ring to it in this context. Having just highlighted the actual rule bound nature of speech acts - the reliance on "unnatural" syntactic structure - I then said, so far as biological level cognition goes, evolution then sorts out its epistemic "rules".

    But I didn't use scare quotes because I didn't expect your turn of mind to be so constantly literal.

    Thanks for the response Apokrisis, that took time to write out.Sam26

    Maybe now you can address my actual point - that the epistemology of syntactic speech acts may have a very different bedrock than embodied cognition.

    One is fundamentally subjective. The other, I'm saying, aspires to fundamental objectivity.

    Much mischief is done in "theory of truth" circles because the dichotomous, or complementary, nature of this division is not properly recognised.

    How could Turing have so impressed people with his theory of Universal Computation? Why did folk feel so convinced by Platonic idealism or logical atomism?

    It just seems obvious that reason can grasp at some fundamental objective principles that are "beyond nature". And is the failure then to be able to completely secure them an actual failure?

    These are the kinds of questions which are really bedrock to that other aspect of our being.
  • My doppelganger from a different universe
    No two electrons in the universe can be in the same quantum state - they are fermions, remember.tom

    If they are entangled, do you think you can say which one is which? Is that A over there, and B over here, or vice versa?

    Of course MWI "solves the problem" as ever. :-}
  • Thoughts on Epistemology
    Second, not only are there beliefs that arise non-linguistically, but our thoughts are also not dependent upon linguistics. This it seems, has to be case if one is to make sense of the development of linguistics. For if there are no beliefs and no thoughts prior to the formation of linguistics (language), what would be the springboard of language? How does one get from a mind of no thoughts and no beliefs, to a mind that is able to express one's thoughts linguistically? It also seems to be the case that language is simply a tool to communicate our thoughts to one another, which also seems to lend support for the idea that thinking is prior to language.Sam26

    I pretty much agree with the rest of your post, but this step is suspect I would say.

    Of course it all depends on how you define thinking. As you say, animals can reason in a causal fashion. Brains are evolved for that kind of Bayesian inference. Certain bodily actions will predict certain experienced outcomes.

    But language is the enabler of what we really mean by thinking - cultural ideas giving a symbolic meta-structure to individual psychology. I can see a tree as a "tree", together with all that flows from that given a structure of cultural belief. And paying attention to a particular tree will result in at least the urge for some comment - a speech act that expresses that cultural belief as some syntactically organised proposition.

    So an animal will see the same tree and - in attending to it - will start "thinking" in terms of relevant acts of orientation and motor response. That is just the way the brain is wired. Attention "loads up" the "output" side of the brain. It causes thoughts about what to do next, or what might come next. So the animal might start scanning the tree for ripe fruit, as it recognises the sight of a fig tree. It might start to cringe and be ready to run, recognising the tree to be the one likely to conceal a leopard sitting up on a branch.

    This is the kind of bedrock epistemology you are talking about - inference based on embodied experience in a world.

    But humans have added a third kind of automatic reaction to whatever falls into the spotlight of attention. We start to form some sentence. We get ready to speak about the thing. Focusing on the tree, we will already be having the same orientation and motor preparation thoughts - hmm, figs, whoah, leopards. But we then have the third unique motor act which is also now an informational or symbolic act. We get ready to make an utterance. And utterances have a grammatical or logical structure.

    Of course, early human responses probably wouldn't have seemed particularly rational or philosophical. The utterances that would have sprung to their minds, or even been verbalised, would be judged rather matter of fact, or perhaps a little mystical or customary. That just argues that modern human civilisation has developed a much more overtly logical and rationalising frame of mind. Speech acts are constrained by more careful rules - on the whole, depending on the company we keep.

    So the point is that speech acts did from the start mark a departure point for Homo sap. On one level, it was just the addition of another kind of motor response. See tree, make a noise. Or even if you don't make that noise, automatically you start to think it - feel the urge tickling your throat - just as much as you feel your hands starting to shape so as potentially to climb it, or your taste buds start tingling in preparation for sweet figs.

    But that nascent motor act is also a nascent symbolic act. The syntactical utterance could start to have a semantic meaning. In epistemically dual fashion, the mind of Homo sap was both a biological inference machine, living in a bedrock causal flow of embodied action, and also dwelling in this new realm of cultural belief. Social information was structuring the Homo sap mind. And that has now a different epistemic basis.

    It depends on the bedrock of embodied causal being, but it is also - by design - increasingly detached from it. It wants to be separate, so as to now make possible a human realm of narrative, of fiction, of science, of art, of religion, etc. It wants to forget the bedrock roots of all thought and awareness - the embodied animal condition - so as to be free to invent whatever it finds useful at a cultural level of semiotics.

    I think this makes a big problem for your desire to secure epistemology in bedrock causal knowledge. Yes, that is the bedrock of our mental being. But also, the other aspect of our nature is now the linguistic and informational one that has the aim of transcending this very groundedness. Cultural belief is always demanding to be cut free of what it sees as mundane reality, allowed to go wherever it likes.

    Of course, this assertion of symbolic freedom is problematic. It does in fact still need an epistemology. There are reasons for rules of grammar, rules of thought, rules of reasoned inquiry. There is a best way to use our linguistic freedom - arguably. So we can't just use the epistemology of the bedrock causal view as the guide to how language should "rightfully" operate. There is a reason why "theories of truth" are of such philosophical concern.

    Biology and evolution sorted out the epistemic rules for an animal level of cognition. The epistemic rules for linguistically-structured thought could be another whole ball-game. I would certainly argue that their bedrock seems "mathematico-logical" for a good reason.

    It feels like, instead of looking downwards to our totally subjective biological embeddedness - holding up one hand, then another; or kicking at stones - we should be looking upwards to what it means that we could also be "completely free" within the bounds of some "objective rational attitude" to existence. Where does language - syntactically-encoded semantics - have its real ontic home?
  • Conscious decision is impossible
    Even so, we can be conscious of a decision. We can attend to a choice presented to us. The choice could be whether or not to hit a button. The choice could consist of a whole panel of buttons, as in a vending machine.

    So yes, attention is a thing. It narrows our focus on the world, or even our thoughts, by suppressing whatever seems extraneous. So attention itself involves a decision. It is the choice not to be focused on anything else at some moment. And that choice could exclude a vast range of other possibilities already.

    Then conscious of some particular area of action or choice, like the bounteous variety of a vending machine, we might narrow our attention still further to the Mars bar. And even then, there is the choice to buy it, or not.

    If buying the bar is our daily habit, then we could just hit the right button with little attention. There is also habit or automatism. As much as possible, we want to make our choices in a learnt and routine fashion. Attention is there to deal with choices and decisions that are surprising, novel or significant.

    The fact that attention is a narrowing of awareness - an active exclusion of many alternatives - is the feature, not the bug. It is how we avoid just acting out of unthinking habit, even if mostly we want to learn to act out of unthinking habit.
  • Conscious decision is impossible
    Everybody can only focally be conscious of one thing at a time.bahman

    So we could decide on whether or not to do it? We have two choices at least?
  • Neither Conceptual Nor Empirical
    If you are going to blather on about folk, you ought to at least spell their names right.

    And this Osho ... have you been a fan of him long? Doesn’t really seem to be your usual sort. You think his life was some kind of shining example, eh? Tell us more. :D
  • What is the difference between science and philosophy?
    Philosophy tolerates a remarkable amount of bullshit rationalisation. Science tolerates a remarkable amount of bullshit measurement.

    Put the two together and it still works.
  • Neither Conceptual Nor Empirical
    I'm concerned to hear about your eyesight, but you seem to be able to read your screen somehow, so as a reminder....

    Ramsey’s criticisms of Wittgenstein, I shall suggest, had an impact, as did his alternative. That alternative was a kind of pragmatism. By 1926 Ramsey was a full-on Peircean pragmatist. In the crucial time 1929–30, the last year of Ramsey’s life, when he and Wittgenstein were together in Cambridge and before Wittgenstein turned his back with finality on the Circle, Ramsey transmitted that Peircean pragmatism to Wittgenstein.

    Moreover, I shall argue that Wittgenstein adopted, circa 1929, Ramsey’s pragmatist position on generalizations and hypotheticals, and then went on to extend Ramsey’s pragmatism to everyday beliefs. But while Ramsey also extended pragmatism to all beliefs, he would have objected to the particular direction Wittgenstein took pragmatism, had he lived to see it.

    My final suggestion will be that Wittgenstein in turn planted the seeds of pragmatism in the Vienna Circle, preparing at least some of them to explicitly turn to pragmatism.

    https://jhaponline.org/jhap/article/view/2946/2607
  • Neither Conceptual Nor Empirical
    Who said you were old and set in your ways? :)
  • Neither Conceptual Nor Empirical
    What? This time you plan to read it?
  • Neither Conceptual Nor Empirical
    Nice.Banno

    Haven't you read Cheryl Misak yet? Peirce (via Ramsey) was the one who showed Witti the way out of the bottle of logical atomism.

    Really, you guys just keep cracking me up! :)
  • Neither Conceptual Nor Empirical
    Regardless, this is precisely the kind of Scholastic quibbles that are actually irrelevant to value. All you see is empirical and conceptual things, and you call that truth. You even try to subjugate value to empirical concernsAgustino

    I realise you need to make this come out right for transcendent Christian metaphysics. But that's your loss. Wake me up when you are tired of being a historical curiosity.

    I was only correcting your poor understanding of Peirce anyway.
  • On Doing Metaphysics
    Well, you could... I don't know - maybe explain better.T Clark

    The argument is over what kind of mathematical relation defines a logical dichotomy - a dichotomy being a relation that is mutually exclusive and jointly exhaustive.

    MU wants to treat it as simple negation. A and not-A. The presence of some thing, and then its absence or its erasure. But that is question-begging as it doesn't go to any mutuality that could form the two poles of being, nor to the way the two poles then demonstrably exhaust all other possibilities.

    So a dichotomy is about taking a difference - an asymmetry or symmetry-breaking - to an extreme. It must begin in sameness and wind up looking orthogonally opposed. You don't just have chance and its absence, you have chance and necessity - an opposition of two poles of being that then encompass everything else that could be "somewhere in-between" these complementary extremes.

    So likewise every metaphysical-strength category. You don't just imagine discreteness and its absence. You can only imagine discreteness in terms of the absence of something else, its exact opposite of continuity. Stasis makes no sense unless understood in terms of being antithetical to flux. Oneness is not a meaningful concept except to the degree it contradicts multiplicity.

    Then seeking a mathematical model of this relation, the best understanding is an inverse or reciprocal one.

    MU's weak-arse negation is like addition and subtraction. Count up three places, then erase those three places to end up back where you started. It is like a mirror symmetry. Flip the image over to break the symmetry. Then flip it again and you are back where you started. It is a symmetry-breaking, but nothing much has really changed as it is so easy to return to unbrokenness by a single step reversal of your path. An A-sized step gets negated by a second A-sized step - just now in the other direction.

    Mathematically, it is the symmetry-breaking of a zero. 1 + -1 = 0. It is about the least amount of symmetry-breaking you can get away with. It is the symmetry breaking that remains as close to nothing actually happening as possible.

    A dichotomy then represents the opposite end of the symmetry-breaking scale - one that is as extreme or asymmetric as possible. And a reciprocal relation models this well as each move in one direction causes a matching move in the other. If one end of the relation grows, the other actually shrinks to the same degree. Two poles of being are in play, each acting on the other in mutual and exhaustive fashion.

    Now the mathematics is a yo-yo around 1, not 0. It is a relation anchored on an actual unity - a foundational sameness - that then gets broken in two complementary directions. Hence it is a triadic or developmental relation being modelled.

    So consider the development of a reciprocal in the form of a fraction - a numerical inverse.

    We start with 1. This 1 is 1/1 (Aha, the latent symmetry breaking which so far has changed nothing!) Then we get 2, and so 1/2. Then keep counting. We get 3, and thus 1/3 as its reciprocal. Guess where this is going next. We get 4 and its formal inverse, 1/4. Every time one number gets bigger, it forces its partner number - anchored by this particular form of opposition - to get smaller. The values are being driven apart.

    Extremise the relation and we get infinities and infinitesimals. The infinitesimal is 1/infinity. The infinite is 1/infinitesimal. Every actual number - fractional or whole - is then contained within the limits of this canonical relation. The infinite and the infinitesimal emerge as the limits on the breaking of the symmetry represented by the ur-somethingness of the 1.

    A relation has to relate things. A self-relation is tautologous. Just counting up or down is simply to add the minimal claim that "a something" exists to break the ultimate symmetry of a zero-ness. There is at least 1 thing now, and you can then imagine 1-1 to recover the initial symmetry from which this one-ness must have mysteriously arisen, or 1+1+1+1... as the operation to keep breaking this zero-ness in the vain hope of finding its other limit.

    You can see all the usual metaphysical dilemmas that flow from this sound of one hand clapping. How did something arise from nothing? How could we have creatio ex nihilo?

    But a reciprocal/dichotomistic logic derives complementary limits of difference from an initial absolute sameness. Now we do start with something - but it is an undefined oneness, a vagueness, a firstness. It is as much everything as it is nothing. It needs no stronger definition than the claim that it is a unity, an unbroken symmetry.

    And then we can imagine a fundamental division in mutually definitional directions. If this symmetry starts to show some discreteness, some discontinuity, then matchingly, there is the new-found definiteness in the continuity that it claims to be moving away from. If the action reverses its course, it will be heading back towards its actual opposite, not simply negating its existence.

    If we say something is becoming more fractional - 1/3 is now 1/333 - then it is not just shrinking towards nothingness as one of the limits on oneness. It is moving ever further away from its own inverse, 333/1. It is expressing its tendency towards infinitesimality in terms of the countering possibility of the infinite.
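    The contrast between negation anchored on zero and a reciprocal relation anchored on unity can be sketched numerically. This is just a toy illustration of my own, not anything from the maths literature:

    ```python
    # Additive negation is anchored on 0: a step and its reversal cancel
    # exactly, returning to the unbroken zero in a single move.
    step = 3
    print(step + (-step))  # 0

    # A reciprocal relation is anchored on 1: as n grows, 1/n shrinks to
    # match, yet their product always recovers the original unity.
    for n in [1, 2, 3, 333, 10**6]:
        print(n, 1 / n, n * (1 / n))

    # The two poles get driven apart towards the infinite and the
    # infinitesimal, while the relation itself keeps yo-yoing around 1.
    ```

    So where negation only toggles between something and nothing, the reciprocal holds two mutually-defining extremes in tension about a shared unity.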

    So it boils down to monism vs triadicism.

    Monism claims there is either nothing or, instead, the one thing. (So it is in fact reliant on a metaphysical dichotomy, but understands it as a dualism - a simple presence vs absence distinction).

    Triadicism fixes this by seeing presence and absence as relative to the third thing of a vagueness or apeiron. There is the unity of an unbroken symmetry which is neither A nor not-A. The principle of contradiction does not yet apply. 1 = 1/1. And turn 1/1 upside down, multiply it how you like, and you see no difference.

    But as soon as you allow the possibility of a difference, a symmetry-breaking, then you get a separation to opposing poles of being. If you can have 2/1, then you can have 1/2. A single step now causes a break in the actual scale of being. Growth is matched by shrinking, not merely by not-growth. The difference is a real one, not merely the unplaced notion of one hand clapping - an event with no context against which to measure itself.
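    The reciprocal arithmetic above can be put numerically. A minimal Python sketch (my own illustration, not from the post): a ratio and its inverse sit at equal distances either side of the unity 1 = 1/1 on a log scale, so growth in one direction is mirrored by shrinkage in the other rather than by mere absence.

    ```python
    # Illustrative sketch: on a log scale, x/1 and 1/x are mirror images
    # about the symmetric unity 1 = 1/1, since log(1/x) = -log(x).
    import math

    for x in [2, 3, 333]:
        ratio, inverse = x / 1, 1 / x
        # The two reciprocal poles lie at equal distances from log(1) = 0.
        assert math.isclose(math.log(inverse), -math.log(ratio))
        print(f"{x}/1 is {math.log(ratio):.3f} above unity; "
              f"1/{x} is {abs(math.log(inverse)):.3f} below it")
    ```

    So 1/333 is not simply "closer to nothing" than 1/3; it is further from its own inverse, 333/1, with the distance between the two poles growing symmetrically as the symmetry keeps breaking.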
  • On Doing Metaphysics
    Sigh. What can you say when folk can't get the difference between a mirror symmetry and an actual symmetry breaking?
  • Neither Conceptual Nor Empirical
    His picture-theory is a 'correspondence' and while he doesn't really offer a solution, I like this:TimeLine

    I think Wittgenstein proved quite definitely that the idea of an isomorphism between language and reality, or that language can act as a picture for reality is nonsensical.....

    Pierce... the fly trapped in the bottle
    Agustino

    You folk must be thinking of the dyadic semiosis of Saussure and not the triadic relation of Peirce. Big difference.

    Value would be instantiated in the Peircean sign relation as the very purpose embodied by a relation. It would be the reason for the relation to even be. Hence ... pragmatism.

    For example, seeing red - the ability to make a sharp discrimination of hue in this particular part of the visible light spectrum - is of ecological value to a primate. Clearly so, as colour vision was first lost (in a nocturnal ancestor) and then re-evolved (as later primates became diurnal foragers again).

    For linguistic humans, red can come to be a higher level cultural symbol of something. It can come to stand for blood, or danger, or arousal. So the sight of redness then mediates for a cultural value. We see something further in the presence of a daub of red lipstick or a red warning light.

    The dichotomy of factual vs conceptual, or empirical vs grammatical, is about "cold rationality". So it is about the learnt human habit of excluding subjective feeling so as to maximise the advantage of objective, disembodied reasoning.

    There is pragmatic value in going up another level in terms of semiosis, leaving biology and individual psychology behind and becoming more purely the creatures of a rationalising or scientific culture.

    So value is still embodied in the relation. It is the whole point of the deal. It is the taking of a view which yields some advantage. But the rational ideal is one that is "dispassionate" in leaving behind the subjectiveness of our biological selves, and even our traditional social selves, so as to rise to become this "totally objective" self who now values some new set of ideal things .... like beauty, good and truth. :)

    Value never disappeared from the equation. It was just culturally reimagined in a way that feels pretty damn elusive to us biological creatures.
  • Where did this insistence on methodological/disciplinary, cognitive/intellectual purity come from?
    Heh, heh. If it wasn't obvious already, some methods of thought are better than others.
  • On Doing Metaphysics
    Maybe a diagram would be easier....

    + | -

    vs

    +| ------------------------------------------------------------------