• A New Paradigm in the Study of Consciousness
    I think you focus too much on entropy, when what is obvious is that self organization is progressing.Pop

    Well you can’t even be listening to what I’m saying then. I’m arguing the biosemiotic position that is now constructed on the basis of dissipative structure thinking.

    Everything that exists, exists as an evolving self organizing system - a self in the process of accumulating / integrating information. This is the machine.Pop

    Life requires an epistemic cut between rate independent information and rate dependent dynamics (Pattee). Life is thus a modelling relation (Rosen).

    Thus there are formal reasons for rejecting these assertions.
  • A New Paradigm in the Study of Consciousness
    And, ultimately we are a self organizing system.Pop

    Not so. Life is something extra in being able to apply the logic of machinery to the entropic world. Nature has no machines. Life began by being able to apply the mechanical trick of a Maxwellian demon. Life can build the gate and operate the switch that directs random dissipation towards its own existential goals.

    We call it self organisation when it is physics being organised by its own boundary constraints, but there is no local selfhood involved. There is only local randomness and accident. Life adds mechanical order, and that further trick is quite novel.
  • The Federal Reserve
    The real story is Bretton Woods and how the US agreed to fund the recovery of the broken post-WW2 colonial empires by making the US dollar the world currency. Nixon then abandoned the gold standard and the US could turn the tables, becoming the world’s debtor rather than its banker. The Fed just administers this rigged game. Endless money creation is possible because the debt has to be bought by every other currency.

    Some day the worm may turn. The IMF might have the nerve to push through a world Bitcoin. Or maybe we will crash back to gold - China and Russia have built their stocks. However the US has done a good job at creating a hostage situation where any unwinding of dollar hegemony becomes mutually assured destruction for all concerned.

    High finance is a political long game. The Fed runs the front office of the US dollar monopoly.
  • A New Paradigm in the Study of Consciousness
    There is so much misinformation and as a result confusion about entropy, when all natural systems are dissipative systems: dissipation is a necessary element of their self organization.Pop

    Yes. The argument is that open and “far from equilibrium” thermodynamics is the generic case, while a closed system that has gone to equilibrium is the particular case. The Cosmos itself is a dissipative structure.

    But then life adds something further in being able to harness dissipative flows to its own advantage. The physicochemical realm self organises to produce entropy. But that flow is often blocked. Sunlight falling on bare rock is radiated back into space considerably cooled, but still at about 50 degrees C. Life can add itself to that gradient and cool the radiation to 20 degrees on average.

    So it makes a small contribution to cosmic dissipation. Yet the slight edge still supports the Gaian splendour of life as we know it.
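
    To put a rough number on that edge (a back-of-envelope sketch only, treating the radiation budget as simple heat flow, with the solar photosphere at about 5800 K, 50 degrees C as 323 K and 20 degrees C as 293 K), the entropy produced per joule of sunlight degraded is roughly:

        \Delta S_{\text{rock}} \approx \frac{1\,\text{J}}{323\,\text{K}} - \frac{1\,\text{J}}{5800\,\text{K}} \approx 2.9 \times 10^{-3}\ \text{J/K}
        \qquad
        \Delta S_{\text{life}} \approx \frac{1\,\text{J}}{293\,\text{K}} - \frac{1\,\text{J}}{5800\,\text{K}} \approx 3.2 \times 10^{-3}\ \text{J/K}

    On that crude accounting, re-radiating at 20 degrees rather than 50 produces around ten percent more entropy per joule - the small but real edge being described.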

    The rate of space creation in the universe is greater than the rate of entropy creation, so as a percentage of the total space, entropy is decreasing. This permits "order", where order is created by self organization, which relies on information integration.Pop

    The adiabatic account fell out of equilibrium with the electroweak symmetry breaking and left some catching up to do. But Lineweaver paints a nice picture of the balance being restored by the Heat Death when all matter is swept up in evaporating black holes and returned to radiation. The final state is a de Sitter universe composed of its holographic boundaries radiating virtual photons with a temperature within Planck distance of absolute zero.

    So all levels of this dissipative structure story can be aligned. The bit I am focused on is how life inserts itself into the story as self interested information as opposed to the disinterested information that is holographically organising the whole show.
  • The Mathematical/Physical Act-Concept Dichotomy
    LOL. A Continental philosophy campaign for ordinary language and commonsense thinking. Perhaps it could start closer to home. :up:
  • A New Paradigm in the Study of Consciousness
    Are you referring to the "fine tuning" of the universe - that forces all systems in ordered pockets of the universe to self organize by integrating information?Pop

    I don't know exactly what you mean by that. But criticality and spontaneous symmetry breaking are concepts used to show that fine tuning is not such a big deal. If the direction of symmetry breaking is essentially random, then the appearance of being a particular choice is explained away by it being a meaningless accident.

    Self organisation as a result of entropic force is another related thing - dissipative structure theory.

    My point is that life is certainly founded in dissipative structure. Biology pays for its negentropic existence by constructing channels for accelerating the entropification of the universe.

    And then to be able to do that construction using information, the flows it directs must already be on the point of physically tipping.

    It would take too much energy as a farmer to move a herd of dead cows from one paddock to another. But very little energy to stand at the gate and shut it behind a herd of live cows as they eventually wander through.

    So the contrast is between a world of static objects that somehow gets organised by the stored information of genes, neurons and words, and a world that is already in random motion and so all that is required is the intelligence of a Maxwell demon slamming the door shut on any fluctuations in the right direction.

    Life is going to set up its informational camp where the living is easiest. And that is why physical criticality - chemistry just itching to happen in some random direction - is a natural foundation for biology. Any reaction that is poised and tippable is obviously in want of a good tipping. And that is what tossing an enzyme into the mix does.
  • The Mathematical/Physical Act-Concept Dichotomy
    There’s a good essay here on how the detail matters….

    Is Peirce a Phenomenologist? - https://arisbe.sitehost.iu.edu/menu/library/aboutcsp/ransdell/PHENOM.HTM

    Note the Cartesian roots of phenomenology and its steady retreat into the primacy of subjectivity. Peirce argued for the irreducible triadicity of a semiotic modelling relation and an expansion of that from a statement about epistemology to a story of ontic and cosmic generality.
  • A New Paradigm in the Study of Consciousness
    Of course I'm referring to the physical phenomenon, not the mathematical techniques for modeling it.Enrique

    So in what way is a brain wave the same thing as a quantum wave? And in what way is either like a ripple on a pond?

    A brain wave is a general way to talk about patterns of excitation~inhibition in neural circuitry. The circuitry is presumed to be doing the functional task of the integration and differentiation of information so as to meaningfully model a self in a pragmatic relation with its world. And so a global measure of changes in voltages at the scalp says something about the general statistical degree of coordination versus isolation in relation to types of brain task.

    In other words, the brain is being modelled as some kind of computer. Waves - as various degrees of EEG signal choppiness - are an attempt to glean some kind of insight about the nature of the software routines from the crackling sounds the hardware happens to be making. And even then, waves - as a measure of coherent dynamical simplicity - don't tell the whole story. A sine wave makes a clean theoretical baseline for measuring an EEG recording's departures from such a state of ultimate mathematical simplicity. At the other end of the spectrum is the chaos of scale-free 1/f noise - a mathematical model of incoherence of wavelet fluctuations occurring over all spatiotemporal scales.

    So - unlike the way you are using the term - neurobiology has some maths in mind. The kind of maths that can ground experimental measurements. It is saying that if the system in question is extracting meaningful work from patterns of neural excitation and inhibition - a hypothesis amply supported by the structural anatomy - then we can use the opposing extremes - monotonous sine wave vs 1/f chaos - as the contrasting bookends for a global statistical measure of the brain's activity at some sampling window in time.
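
    As a minimal sketch of that bookending idea (my own toy example, not any actual EEG pipeline; it assumes only numpy, uses a synthetic 10 Hz sine as the coherent extreme and synthetically shaped 1/f noise as the scale-free extreme, and uses normalised spectral entropy as one possible bookended measure):

        import numpy as np

        def spectral_entropy(x):
            """Shannon entropy of the power spectrum, normalised to lie between 0 and 1."""
            psd = np.abs(np.fft.rfft(x)) ** 2
            n_bins = len(psd)
            psd = psd / psd.sum()              # treat the spectrum as a probability distribution
            psd = psd[psd > 0]
            return float(-(psd * np.log(psd)).sum() / np.log(n_bins))

        fs = 256                               # EEG-like sampling rate (Hz)
        t = np.arange(0, 8, 1 / fs)            # 8 seconds of signal
        rng = np.random.default_rng(0)

        sine = np.sin(2 * np.pi * 10 * t)      # one bookend: a pure 10 Hz oscillation

        white = rng.standard_normal(len(t))         # the other bookend: 1/f noise, made by
        spec = np.fft.rfft(white)                   # shaping white noise so its power
        freqs = np.fft.rfftfreq(len(t), 1 / fs)     # falls off as 1/f across all scales
        spec[1:] /= np.sqrt(freqs[1:])
        pink = np.fft.irfft(spec, n=len(t))

        for name, sig in [("pure sine", sine), ("1/f noise", pink), ("sine + 1/f mix", sine + pink)]:
            print(f"{name:14s} spectral entropy = {spectral_entropy(sig):.2f}")

    The numbers themselves don't matter; the point is only that any recording can be placed somewhere between the two theoretical extremes by a single global statistic.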

    EEG recordings were great back in the 1950s when there wasn't anything better. But the popular understanding got stuck at the alpha/beta/theta brainwave level of classification. The hippy dippy shit. It wasn't interested enough to follow along to the P300s and N400s that could be used to impute something about the real dynamics of the brain's information integration.

    Classical waves are a similar story. In this case, the mathematical bookends were the mechanics of classical waves vs the mechanics of classical particles. If you had some classical phenomenon like a ripple on a pond, you could imagine it as an atomistic collection of points that was to some extent or other glued together by a pattern of attractive-repulsive forces. There was enough viscosity in the system to produce a collective behaviour that again varied somewhere between the Platonic ideals of the simplest symmetry of a sine wave and the maximum complexity of chaotic turbulent disorder.

    So the world at a classical level of observation and modelling became measurable because the wave concept was reciprocal to the particle concept. You could place physical systems like ocean surfaces or even desert sand dunes somewhere on a spectrum between collective coherence and individual independence in terms of classical measures of mass and force.

    Then you have talk about waves - and their lack - in yet another setting. Quantum mechanics. Discrete particles also seemed to act like continuous waves, and vice versa. An electron was like a wave. A photon was also like a particle.

    This observational surprise was formalised by quantum theory, where the 'waviness' of a wave function was built into a calculus of evolving probabilities. This created its own mess about how the "every possible path" of the superposition then actually got collapsed or decohered to result in a classically defined state.
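
    To make "calculus of evolving probabilities" concrete in plain textbook terms (standard quantum mechanics, nothing beyond it): the amplitudes of a superposition evolve smoothly and deterministically, and probability only enters when the Born rule is applied to a measurement outcome:

        i\hbar \, \partial_t |\psi(t)\rangle = \hat{H} \, |\psi(t)\rangle,
        \qquad |\psi\rangle = \sum_k c_k |k\rangle,
        \qquad P(k) = |c_k|^2

    Everything before the last expression is wave-like and reversible; how that final step - collapse, or decoherence into one definite outcome - actually happens is exactly the mess referred to above.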

    So the wave function description of fundamental nature ended up being both fantastically successful and radically incomplete in a way science wasn't exactly used to. I could go on. But my essential point is that we talk about waves for a particular reason, just as we talk about atomic points. They are meaningful to the degree they are Platonic mathematical objects we can drop into our theoretical frameworks and start making bookended measurements.

    So we have a sine wave in mind as the simplest combination of a translation and a rotation - the canonical symmetries (or energetic symmetry breakings) of classically-imagined spacetime. The generator of a sine wave - as a continuous trajectory - is the height of a point marked on the edge of a rolling wheel.
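
    Spelling that picture out (just standard kinematics, nothing specific to the argument): a point on the rim of a wheel of radius R rolling at angular speed ω traces a cycloid, whose height is a pure sinusoid in time while its horizontal coordinate carries the uniform translation:

        x(t) = R\left(\omega t - \sin \omega t\right), \qquad y(t) = R\left(1 - \cos \omega t\right)

    The rotation contributes the sin and cos, the rolling contributes the ωt translation, and the height y(t) taken on its own is the sine-wave baseline being described here.
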
    Then from this useful model of greatest vibrational or resonant simplicity, we can look to the "other" that anchors the other end of the business of scientific measurement.

    That is all there is to the "magic" of a wave. It is a shape so simple that nature can't help but start from that geometric motif, just as the notion of a zero-D point is also the conception of an ultimate state of discreteness or incoherent discontinuity. The idealised fluctuation.

    And this is why EEG studies and quantum theory can both be talking about waves without thinking of those waves as some kind of concrete material substance. They are just one end of a useful modelling dichotomy. A logical foundation from which to mount the experimental assault.

    So you say you are referring to the physical phenomenon here. Brain waves = quantum waves = somehow or bloody other, consciousness. And that is why I say you just freely abuse these concepts to create a hand-waving charade of explanation.
  • The Mathematical/Physical Act-Concept Dichotomy
    I don't think Peirce is so different from all those continental thinkers anywayGregory

    You could say the departure point was similar to Kant and Hegel. But he saw that the obvious project was to fix their shaky conceptions of logic and so cement what could even be meant by ontological structuralism, or the systems/process philosophy view.

    Phenomenology has been developing since Kant.Gregory

    You mean it has become increasingly inured to the failures of its early ambitions? :smile:
  • The Mathematical/Physical Act-Concept Dichotomy
    No wonder you think it has nothing to offer the understanding of the application of reasoning.Joshs

    I'm thinking that because you are failing to show how it has. You are welcome to start showing any time soon.
  • A New Paradigm in the Study of Consciousness
    I’ve read those books. I even argued the issues with McFadden when he was first pushing an EM field story in the 1990s. The sort of nonsense you are peddling was done to death back then. Meanwhile science has rolled on and found where biology actually does exploit quantum loopholes to allow hyper efficient semiotic control over the energetic basis of life.

    So I contrast the two. The abuse and use of quantum physics. It is very easy to tell the difference.
  • The Mathematical/Physical Act-Concept Dichotomy
    Phenomenology compounds the Cartesian error by building up the barrier made between mind and world. Semiotics instead breaks it down as epistemology is made ontology. A commonality of rational structure is claimed, and can also be tested as a model of reality.

    And I’m not seeing how phenomenology contributes any interesting comment on this particular OP or the general application of reasoning.
  • The Mathematical/Physical Act-Concept Dichotomy
    Out of curiosity, why is it a dilemma?kudos

    Because folk see dichotomous opposition as a logical contradiction rather than a relation of logical reciprocity.
  • You Are Reaction Consciousness, A Function Of The World
    I still have no idea on which side of any argument you stand. So I’m out.
  • A New Paradigm in the Study of Consciousness
    Much of that is speculative, but not at all farfetchedEnrique

    It’s just quantum nonsense, an abuse of terminology rather than a concrete conjecture. Nothing to see here.
  • You Are Reaction Consciousness, A Function Of The World
    Reaction is a response on a biological and/or behavioural level to an affect,boagie

    So the reaction is to an affect rather than the world as the thing in itself? Are you making my point that perception is a triadic modelling relation rather than a Cartesian representation?

    The world affects, the subject is effected/reacts.boagie

    The world affects what exactly? Again, every moment I am being “affected” by any amount of physical information. But most of it I don’t notice or remember. Something more must be going on than the old cogsci “neural data display”.

    Your subject matter here is psychological science. It is unclear how your use of the term “reaction” either agrees or disputes any particular theoretical position.
  • The Mathematical/Physical Act-Concept Dichotomy
    but those presuppositions remain unexamined by it.Joshs

    Nonsense. The history of physics shows a continual revision of the suppositions in exactly the way I describe. Newton comes along with one mathematical framework that embeds a set of particular symmetries. Then Einstein comes along and shows how that classical dynamics is just a special case of even more general symmetries (needing even less in terms of those particular presuppositions).

    The way forward has thus been clearly marked for many decades. Okun’s cube describes how all the more particular physical schemes must eventually arrive at a quantum gravity theory that successfully combines all three fundamental constants - c, G and h.
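
    For reference, here is the conventional reading of that cube (the Bronstein/Okun "cube of theories"), written out as a simple lookup; the corner labels are the standard textbook ones, not anything claimed in this thread:

        # Each corner of the cGh cube is the theory you get by taking seriously
        # a particular subset of the three fundamental constants c, G and h.
        CGH_CUBE = {
            frozenset():                "Galilean/Newtonian mechanics",
            frozenset({"c"}):           "special relativity",
            frozenset({"G"}):           "Newtonian gravity",
            frozenset({"h"}):           "quantum mechanics",
            frozenset({"c", "G"}):      "general relativity",
            frozenset({"c", "h"}):      "quantum field theory",
            frozenset({"G", "h"}):      "non-relativistic quantum gravity",
            frozenset({"c", "G", "h"}): "full quantum gravity - the corner still being chased",
        }

        for constants, theory in sorted(CGH_CUBE.items(), key=lambda kv: len(kv[0])):
            print(f"{', '.join(sorted(constants)) or 'none':9s} -> {theory}")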

    So you can challenge the game plan. And there is a whole academic industry in that. But also there is a reason why physics thinks it has got a grip on the way to formulate its ground suppositions, and even better, how to attain ever greater generality by eliminating the need for as many of them as possible. The correctness of this approach is proven by its experimental success.

    it is quite useful within certain limits as a way to anticipate our world, but its presuppositions are profoundly less useful in making sense of human behavior, particularly the relation between affectivity, cognition and action.Joshs

    Now you are talking about the grounding of life and mind science. And I am the first one to say that physics - of the Newtonian kind - is inadequate to the task.

    Again, a Peircian dialectical logic is useful for physics in its present form, but at some point it will recognize the need to move beyond this, as many in philosophy and psychology already have.Joshs

    The irony there is the Peircean view is quite the other way around. It goes from the psychology of cognition to a description of the material world as itself a semiotic system. So it is as anti-Newtonian as you can get. But it also turns out to predict the informational turn that physics had to take once it encountered the dialectical marvels of quantum theory.
  • You Are Reaction Consciousness, A Function Of The World
    You do not seem to disagree that perception is reactionboagie

    Define reaction more precisely then. That way my disagreement, or agreement, would become clearer.
  • You Are Reaction Consciousness, A Function Of The World
    Perception is reactionboagie

    Psychologically, it is more complex. We react to what we think is important and ignore everything else as mere noise. So a perceptual response is a reaction to a meaning we read into the world. It is as much about what we can dismiss as about that which feels demanding of our attention.

    The world is what it is. Our response divides into some useful balance of “whatever” and “OMG!”.
  • The Mathematical/Physical Act-Concept Dichotomy
    The points you raise are fair but also already incorporated into what I say.

    My position is Peircean (CS Peirce). So the dichotomous division of things is only to create the separations that then allow the third thing of their interaction. This is the basis of a holist metaphysics.

    And then - again as argued by Peirce - modelling may be a human practice but it is also completely general as the logical process of semiosis. We don’t have a free choice about how to model as the essential reasonableness of a logical relationship is something which even the Cosmos can’t avoid in developing its own concrete state of being.

    So humans can construct models of reality any way they choose. They can live according to magical or animistic thinking. But as soon as they start down the path to a dialectical logic, they are embracing the same symmetry-breaking logic of physical existence itself. It is the only route to evolving complex order as we find either in our models of particular physical phenomena, or in physics as a general metaphysical phenomenon.

    This is why maths proves to be “unreasonably effective”. The process of its own development apes the constraints that self-organised consistency places on any form of uncertainty - informational or material.

    Maths of course has its own ideas about its metaphysics. It is torn by a reductionist dilemma about whether to call itself an arbitrary modelling exercise or a revelation of Platonic necessities. Folk get very passionate about which side to bat for.

    My point is that Peirce in particular offered a foundation that absorbs both horns of the dilemma to leave the Hegelian synthesis. The arbitrary and the necessary are the division that must emerge into the light to allow for the third holistic thing of them standing in an interesting variety of relations to each other.

    Maths enjoys its game of playing off absolute necessities (axioms) against absolute arbitrariness (x = pick any number). And in constructing two exactly opposed extremes, it makes itself a large enough model of a rational world to encompass a world that is in fact rationally organised.

    So it is all connected. There is a world to model because it has general organisation plus arbitrary detail. Physics works because it models the world as laws and measurements. Maths works because it enshrines that same division at a level so abstract it feels possible to talk about all possible worlds.

    Embrace the dichotomy and move on to find why triadic holism is what works when it comes to rational inquiry.
  • A New Paradigm in the Study of Consciousness
    Upvote because I have no idea what you've written, but believe it completelyT Clark

    Hah! I’m deep in the weeds on this stuff as I’m catching up on the vast amount of the new biology that has emerged this past decade.

    But the simple idea is that life needs some kind of foundation to justify its causal tricks. Life is all about top down information regulating bottom up physics. Genes and other kinds of signalling turn chemical and physical processes to their own selfish advantage by being able to control their rate and direction.

    The surprising realisation is that life can only do this if that physics and chemistry is critical or unstable - poised on a knife edge.

    So the usual physicalist presumption is that life - like any machinery - would want to be made out of stable, solid, stuff. You can’t build complicated structure from unstable material.

    But the opposite is the case. For information to be able to impact on the physical realm, it must be working with a material that is right on the edge of being tipped. The material must be fundamentally uncertain - in the way that BOTH the randomness of classical thermodynamics and quantum mechanics suggests - so that life’s informational records and memories can make a specific difference and give some action its chosen direction.

    Mr Quantum. I know you could decohere into a definite energy state at some random moment and place. But why not decohere over here in a few seconds in a way that feeds all that potential into this little game I’ve got going where I feed hot electrons through a chloroplast reaction centre and fix some carbon.

    So the big woo story is about how quantum weirdness subverts classical physics so badly that maybe even consciousness might be - hands wave furiously - a kind of coherent state.

    But life and mind are the products of a properly complex causality - one where management of instability is the general core principle. So life thrives on the edge of chaos. The more tippable the physics, the more profit there is for the information that can tip it.
  • A New Paradigm in the Study of Consciousness
    Brain waves are some large scale signatures of these oscillating superpositions.Enrique

    But biology tells us that every enzyme or other basic biological mechanism is both quantum and informational.

    So sure, biology is quantum in the sense that life employs artful mechanical structure to decohere quantum potentials in a controlled way. A photosynthetic pigment complex can delay the thermal decoherence of a photon-induced exciton long enough for it to "explore all available paths" in superposition and thus get conducted a considerable distance with near-100 percent efficiency to the reaction centre.

    Biology is classically organised in a way that employs the quantum Zeno effect and so creates a momentary channelling in a warm thermal setting.
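
    A toy numerical illustration of that Zeno-style channelling (a deliberately minimal two-site model of my own, not a simulation of any real pigment complex; the convention assumed is that unwatched coherent evolution keeps the excitation at site A with probability cos²(Ωt)):

        import numpy as np

        OMEGA = 1.0                # coupling between the two sites (arbitrary units)
        T = np.pi / (2 * OMEGA)    # total time chosen so unwatched evolution fully empties site A

        # Between checks the excitation stays at site A with probability cos(OMEGA * dt)**2,
        # and each projective check that finds it there restarts the evolution. So N evenly
        # spaced checks over the time T give a survival probability of cos(OMEGA * T / N)**(2N).
        for n_checks in [1, 2, 10, 100, 1000]:
            p_stay = np.cos(OMEGA * T / n_checks) ** (2 * n_checks)
            print(f"{n_checks:5d} checks over T: P(excitation still at site A) = {p_stay:.3f}")

    The trend is the whole point: the more often the system is "checked", the more the coherent leak is held in place - which is the sense in which controlling the timing and placing of decoherence amounts to control over where the transfer ends up.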

    But this is classical control over quantum "weirdness". It is a harnessing of decoherence rather than letting the decoherence happen in the usual thermally random fashion. The coherence ain't the output driving the show. It is being able to monkey with the timing and place of the collapse which makes life special in being a classical system that is unlike other classical systems in having this kind of situated control over something that would otherwise just be random.

    The ability to do all this then comes from the frankly informational or semiotic aspect of biology - the vast empire of regulatory and feedback signalling that starts with the genetic code.

    The structure of an enzyme or photosynthetic complex is determined by a DNA sequence and its machine-like protein manufacturing process. The millisecond to millisecond running of those structures is determined by a local regulatory flow of information - the web of molecular signals that tell the enzyme to switch its quantum-harvesting powers on or off.

    So right down where biology and neurobiology starts - the nanoscale where both quantum effects and informational effects loom large in the life of the classical molecular component – you have a complex causality going on. You have at least three kinds of account. The quantum, the classical and the informational.

    Decoherence then unites the quantum and the classical description as "dissipative structure" - the story of how thermodynamics organises molecular structure. And then semiotics unites information theory with classical dynamics - the story of how a modelling relation can be used to constrain physical systems.

    This is how the science of life and mind is actually going.

    You are peddling a lot of muddled woo I'm afraid. What is a brain wave when it's at home? What is a quantum superposition?

    One is a collective measurement that humans make - a roar of electrochemical activity heard from outside the neural stadium. The other is a map of quantum probabilities, not some kind of physical thing.

    To call either of them a wave, or a field, or anything else that smacks of literal material being, is merely an analogy. And even to call a classical wave or field a literal material is a folk physics fallacy.

    So the science of life and mind is in fact spectacularly interesting. And it is quite true that life has sunk its informational roots so deeply into the physical world that it can regulate not just its classical behaviour but also tap into its further quantum potential.

    To then turn around and conflate the three levels - semiosis, classical mechanics, and quantum mechanics - as "all some kind of global substantial field phenomenon" is an insult to the freely available science.
  • The Mathematical/Physical Act-Concept Dichotomy
    This type of reasoning is tempting but can be fallacious, for the reasons previously explained. The concepts of mathematics are most commonly acknowledged as valid through proof; proof that heavily involves the form of computation. We can only create once we have seen for ourselves that the dualism was never wholly and fully mutually exclusive.kudos

    Your confusions look to stem from thinking there is a problem with dialectics. Yet reasoning depends on being able to divide the world in a way that allows it to be reduced to a model - a rational system of general rules and particular instances, or deductive theory and inductive confirmation.

    So maths has its general rules - its algorithms. Proofs show that the algorithms are sound according to various reasonable-seeming axiomatic principles. There are even more general ways to test some particular algorithmic generality!

    And then you can test an algorithm by running it with actual measurements, actual data, actual numbers instead of generalised variables. You can plug the particular values into the general statement and make a further judgement about whether the computational result seems sound when matched against the reality of whatever it purports to be modelling.
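
    A deliberately trivial sketch of that division of labour (the example is mine, nothing from the thread): a general rule that is proved once, deductively, and particular instances then plugged in as inductive checks against brute computation:

        def triangular(n: int) -> int:
            """General rule: 1 + 2 + ... + n = n(n + 1) / 2, provable once by induction."""
            return n * (n + 1) // 2

        # Particular confirmation: plug actual numbers into the general statement and
        # compare the result against directly counting the same thing.
        for n in [1, 7, 100, 12345]:
            assert triangular(n) == sum(range(1, n + 1)), f"mismatch at n = {n}"

        print("the general rule survived every particular test thrown at it")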

    So your muddles start by wanting to reject the dichotomies at the heart of any rational modelling exercise. They are in fact the essence of the intellectual enterprise. It's the same for maths, science and philosophy.
  • Is Dewey's pragmatism misunderstood ?
    Do you remember the uproar when Rupert Sheldrake said in a TED talk that the speed of light may not be constant? Led to a 12-month-long Wikipedia editing war and the video of Sheldrake's lecture being removed from the TED website.Wayfarer

    I didn't pay the story any attention being too familiar with Sheldrake and his shenanigans. But a quick check shows he was talking about variation in efforts to measure the speed of light as those became more precise from the 1920s to the 1940s.

    I'm not sure how - perhaps it was morphic resonant clairvoyance - but Peirce himself, as a world authority on precisely this issue, had already made the most cutting comment on that TEDx talk.

    The non-scientific mind has the most ridiculous ideas of the precision of laboratory work, and would be much surprised to learn that, excepting electrical measurements, the bulk of it does not exceed the precision of an upholsterer who comes to measure a window for a pair of curtains.
    —Charles S. Peirce (1908)

    Actually this article is a good general account of how Peirce himself viewed the whole speed-of-light-as-physical-constant issue, as he worked with Rutherford to use the assumption of light speed constancy to ground an international definition of a standard metre of length. And of how that then inspired Michelson and Morley in their interferometer measurements, which demonstrated a constancy that meant there was no dragging ether. Which in turn gives us Einstein and relativity, so building in that constancy as a brute fact ... until someone can step outside both the Newtonian and relativistic frameworks of cosmology to see how that constant got baked into the observable boundary conditions of the Big Bang.

    So this is what a scientist might mean by an evolution of c as a universal constant. First, it is acknowledged that constancy is an axiomatic assumption of the theory being conjectured.

    Peirce’s proposal was that “the standard length may be compared with that of a wave of light identified by a line in the solar spectrum.”

    The proposal was not without problems. It involved “the assumption that the wave-lengths of light are of a constant value,” Peirce wrote in 1879.

    That was shortly before the Michelson-Morley experiments, and he was worried about possible ether effects: “[T]here may be a variation in wavelengths if the ether of space, through which the solar system is travelling, has different degrees of density.”

    Astutely he added, “As yet we are not informed of such variation.”

    And likewise, Peirce made the crucial epistemic point that our uncertainty can only be constrained by acts of measurement. The opposite attitude of course is taken by charlatans like Sheldrake who find their "smoking gun" in scientific failures to completely exclude theoretical claims like morphic resonance, or as here, variable c.

    He realized that there is no such thing as absolute precision. “Dealing as they do with matters of measurement, [physicists] hardly conceive it possible that the absolute truth should ever be reached, and therefore instead of asking whether a proposition is true or false, they ask how great its error is.”

    And then, as I say, with the interferometer, science could make measurements sufficiently precise to be justified in excluding the possibility of a dragging ether as the material medium conducting lightwaves. The Newtonian expectation - born of standard wave mechanics - of a variable c was disproved by experiment! So the assumption of constancy could be taken to a new level of theory - a post-Newtonian one that did away with the redundant materialism.

    To now show that c ain't a constant demands someone coming along with a post-relativistic theory that posits some new kind of measurement - one that produces observable change as some condition or other is changed. That would be science doing what it does. Provide measurable counterfactuals that will - within acceptable error - give a thumbs up or thumbs down.

    And that is the kind of interesting challenge that Sheldrake failed to supply. As usual.

    The Physics Today article is well worth the read for its biography of Peirce as well as his pragmatic philosophy.

    https://physicstoday.scitation.org/doi/full/10.1063/1.3273015
  • Is Dewey's pragmatism misunderstood ?
    Peirce always said that what we see as scientific laws are simply habits of nature.Wayfarer

    Yep. That link is exactly the kind of thing I am talking about.

    Even before relativity was discovered, Peirce was already proposing experiments to check if space was always necessarily flat, or had just evolved towards this generalised Euclidean geometry.

    So his metaphysics spoke to a developmental model of the Cosmos. And science will get around to treating this as a routine insight rather than the weirdest thought anyone ever had.

    That's the fly in the naturalist ointment, ain't it.Wayfarer

    Well you know that I have no time for metaphysical monism or dualism. Only the irreducible complexity of a triadic metaphysics - a principled and rational holism - could make sense of nature.
  • Is Dewey's pragmatism misunderstood ?
    Except that for Peirce, reasoning (semiosis) as studied within the normative science of logic (semeiotic) is emphatically not a psychological process. Rather, psychology is a special science that studies the actual thinking of individual embodied human minds. In other words, psychology depends on logic (and metaphysics), not the other way around.aletheist

    It starts with an examination of our psychological processes - the habits of epistemology - and becomes a claim about logic as an ontological reality.

    Of course, reasoning being abductive, there isn't a strict arrow from one side of the argument to the other. The whole has to be grasped at least vaguely from the outset.

    But this is the deep point of Peirce's philosophy.

    The usual view (once you have got beyond naive realism to Kantian modelling) is that the connection between our psychological models of the world and the reality - the thing in itself - is dangerously arbitrary. So there is our mind, and there is the world. Two different kinds of thing.

    But Peirce argues instead that the trichotomy is simply a logically necessary Platonic-strength form. It is the natural and inevitable shape of any causal structure, any kind of organised set of relations.

    Hence both mind, as an epistemic system, and world, as an ontic system, share the one semiotic logic, the one natural and inevitable order.

    It is something we can suspect from discovering it as the shape of our own rational process of inquiry - our best reasoning method. And it is something we can observe from the fruits of that same process of inquiry as our best explanation for the order of the Cosmos.

    Of course, Peirce's actual efforts at a pan-semiotic ontology were limited by what was known scientifically in his day. And they were also confused with theistic tendencies.

    But we now know enough about biology and neurology to say how semiotic these natural processes indeed are. And also enough about cosmology, thermodynamics and quantum physics to see how a triadic holism is a necessary perspective, while not going overboard and calling it actual semiosis.

    Semiosis as a natural process is really the view from somewhere - the somewhere that is an organism modelling its environment. The Universe is only pan-semiotic in the sense that it is kind of the view from nowhere. It is its own model in terms of its information or boundary conditions.

    Except that for Peirce, the growth of concrete reasonableness is discovered within the normative science of esthetics (not metaphysics) as the only aim that is admirable in itself.aletheist

    I'm familiar with his architectonic. But I'm not interested in contributing to some historical retelling of Peirce as an eccentric loner obsessed with fitting everything into ever more recursively complex triadic categorisations. No wonder people get switched off!

    I am talking about him as a key figure in the long organic tradition that is finally informing current science, especially in biology. And the key thing is the holism with which he shows that the triadic semiotic relation mediates all reality - epistemic and ontic.
  • Is Dewey's pragmatism misunderstood ?
    Again, his categories come from phenomenology, which is the most basic of the "positive sciences" and depends only on mathematics, the discipline that draws necessary conclusions about strictly hypothetical states of things.aletheist

    Yep. As I said, there is the triadic epistemology that is a model of psychological processes of reasoning. That ain't troublesome to any Popperian view of scientific method.

    Then there is the challenging insight that as the mind goes, so does nature. The Cosmos could be understood as the growth of universal reasonableness - an ontological strength application of the trichotomy.
  • Is Dewey's pragmatism misunderstood ?
    Curious now.
    @Ciceronianus the White - what did you think of apokrisis:
    Amity

    I've never been much of a fan of Peirce's Firstness, Secondness and Thirdness and his Triadism, though,Ciceronianus the White

    It might help to separate pragmatism as a triadic epistemology from semiosis as a triadic ontology.

    So both are metaphysical projects concerning how we can know and what then may be. But Peircean pragmatism became uncontroversial as it was easily assimilated to the general mainstream notion of science as practical reasoning, while Peircean semiotics remains a radical challenge to the reductionist presumptions of that same scientistic community of thought.

    So where Peirce was "right", it was obvious. And in fact, folk preferred the simplest telling of the story - the Jamesian caricature.

    Then where Peirce was "weird" was where he laid out the argument that reality itself involves an irreducibly triadic holism and so could never be actually broken down into the beloved monistic caricature of circa-1800s reductionism - the ontology built on atomism, mechanicalism, locality, determinism, etc.

    So every right minded rationalist who believes in the wisdom of the scientific method and analytic philosophy is going to have the same received view of how to react to Peirce.

    They will read his pragmatism as a ringing endorsement of the best reductionist tendencies of the scientific method - while utterly missing the triadic holism of the psychological model.

    And they will run with horror from the triadic holism of the ontology, even though 20th Century physics already delivered the holistic shocks of quantum theory, relativity, and even showed how Boltzmann-era thermodynamics was merely another Newtonian "special case". :razz:
  • The shape of the mind
    Specifically, I like the notion of mental shape because shapes have specific properties, and our properties or abilities 'fit' with what I've described as environmental gradientsPantagruel

    In the ecological or enactive approach to psychology, perception is a recognition of environmental affordances - https://en.wikipedia.org/wiki/Affordance
  • The shape of the mind
    Speaking from personal experience, the "horizon" of my awareness now extends much further, encompassing not only places I've been, different types of new things I have encountered, but, most importantly, awareness of ongoing patterns of things about which I have gained knowledge.Pantagruel

    If consciousness is understood as a pragmatic modelling relation "we" have with "the world", then this ever-larger ability to anticipate the actions and reactions of our environment is what we would mean by a "deepening" consciousness. We learn from experience to predict the world better. And to do that in terms of an "us" or "ego" that has some ever-larger set of plans and goals.

    So this is what sorts out the homuncular circularity of viewing consciousness as some sort of state or substance - the paper being folded into origami forms - that stands outside the modelling of the world. The egocentric nature of the point of view that develops with experience is itself part of the model. It is the "other" to the "world", indeed.

    The broader the outward horizon - seen in terms of all its potential goal satisfactions - the matchingly more focused and narrowed our sense of self becomes.

    A child has a less exalted sense of selfhood to match its less well stocked collection of knowledge, habits and goals. Its sense of self is vaguer, less temporally organised in terms of a complicated life-defining agenda. Intentions are rather immediate - food, love, comfort. Only with socialisation does the self become defined as that part of the modelling relationship which stands for a focus of rational, rather abstracted, economic and cultural intent.

    So the wider the field of view that gets constructed, the more pointlike in terms of a time and place becomes the anchoring self. A child is just at the circus. The adult may have a thousand concerns about other things they could be doing, other ways they ought to be reacting, to the scene at hand.

    "Consciousness" is thus a process, a functional or pragmatic relation, which is formed by its two ends, its complementary poles. At one end is a generalised point-like sense of self as the potential actor in a world. At the other is "the world" as the complete set of potential actions as currently best understood by this actor.

    So consciousness, as a modelling relation, starts with a strictly reflexive sense of selfhood and worldness in some primitive, memory-limited, organism like a sea slug. In other words, no real richness in terms of either an experienced world or an experienced self. No second to second drama that comes from having a capacity for attentional processing on top of habit or reflex learning.

    But any animal with a large brain and plenty of memory/planning circuitry can run a model with a matching level of self vs world complexity.

    Then humans added language on top of neurons and genes as a way to trap or organise information. Socialisation took self~world modelling to a completely new articulate plane.

    A cat is not self-conscious. That is, it doesn't model the world as a complex web of social relations between autonomous conscious selves. So it just exists rather directly in the world much like a child, but without even a child's beginnings in a socially constructed state of awareness.

    Us humans are self-conscious because the reality model we learn does have us included as social actors. That huge extra demand of being an autonomous willing individual is the further language-scaffolded habit we learn to take on.

    This makes the selfhood part of the modelling exercise even harder to actually appreciate.

    It is part of the mythology that there is this world "out there" exactly as we see it. Ripe with possibilities if one is an actor able to spot them.

    But a social model of the self also demands a Cartesian-level separation. Fullscale dualism. We are supposed to be minds - brains in a state of consciousness - that thus exist somewhere beyond the world ... and even, as we become science-informed, beyond the brain's modelling of this world.

    The homuncularity in conventional understanding of "consciousness" is not so much an intellectual failing as a key part of our social training. It is a displacement out of the childish immediate and a projection of the self into the same realm of rationality and abstraction as matches our socialised view of our working environment.

    It has to be that we push back the cognitive horizons in both directions. We push the anchoring self well outside animalistic immediacies - shove the psychological vanishing point right outside our sense of the here and now of material being. And in that way, we set ourselves up to be at home in the much larger world which is a world filled with all the potential satisfactions to be explored by a rational, articulate, social and abstract res cogitans.

    So we are the origin point formed by having so many "shapes" of action or states of intent available to one coherently integrated neural system. Or as humans, the origin point for the space of actions available to a suitably encultured and economically enabled creature.

    That is how I see your initial point about available actions being the definer of any sense of conscious self. As a reciprocal deal, constructing that space of shapes is also the construction of a point of origin or vanishing point that gives the modelling its coherence - its experienced sense of being centred as well as open-ended.
  • Is vagueness a philosophy?
    Yep. Zeno’s paradoxes also hinge on this logical issue of trying to reconcile what conventional logic has rent asunder.

    Is reality discrete or continuous at base? Or are these just polar extremes that derive from constraints placed on the third thing of a vagueness, a Firstness, a potential?

    My argument is that predication is vague. But that is not a problem because we can sharpen it to the degree that pragmatically matters by adding constraints. We can imagine the extreme cases - Platonically or mathematically perfect discreteness or continuity. And then reality can be measured against these contrasting conceptions.

    We can see that there are three things or four things at a glance if they are three or four things like grains of sand or grains of wheat in a flat scatter. Five at a glance is harder - more reliant on a telling structure. Then six needs that structure in the way we can recognise and visualise the six dots making up that number on a rolled die.

    If this recognition ability can be demonstrated in experiments, it does not seem so hard to believe that I can actually visualise as I believe that I do.
  • Is vagueness a philosophy?
    I applaud your effort, but visualizing a square (one shape - or (1) item) is not the same as (4) distinct grains.Don Wade

    I can picture four grains without a problem. I merely point out the psychological machinery involved. It helps to have the simplest and most regular global arrangement in mind, even if that geometry of relations is then also suppressed to a large degree to emphasise the distinctness of each grain.

    Again, I refer to the example of the Rubin Vase:Don Wade

    A bistable stimulus is a rigged and artificial cognitive situation. So it shows interpretations of scenes can be pulled two ways - if the scene is designed to have precisely that characteristic. The image is created so that it is literally a black and white, cut and dried, situation. Pick either one vase or two faces as your only legitimate choices. The PNC applies. In fact, that is what the image actually illustrates.

    But if we are talking about how perception applies to the real world, then that is where vagueness certainly becomes a valuable expansion of the logicist's impoverished world model.
  • Is vagueness a philosophy?
    Yes of course "spectrum" might suggest unbroken continuity.bongo fury

    It both might and usually does....

    A spectrum is a condition that is not limited to a specific set of values but can vary, without steps, across a continuum.

    No, not at all, the discrete version is enough.

    All of your strenuous metaphysics might be missing the point.
    bongo fury

    I think I was explicit enough. The problem is with the kind of monadic logical system you are championing. A logic of vagueness - a triadic logical system - is needed to situate the sorites paradox in a more intelligible world than that provided by mere counting.

    I haven't missed any point. I just supplied a missing meta-logical argument.
  • Is vagueness a philosophy?
    The brain cannot visualize (4) grains of sand that are close to each other. In order to visualize four grains of sand the brain must employ a trick - that is, it will visualize two groups of (2) grains each.Don Wade

    What? Visualising four grains seems easy. Especially if they are arranged as four corners of a square.

    An irregular group of four grains is more of a stretch. But we can also learn that as a pattern.

    So the point is that all perceptual judgements rely on both the local and global. We are visualising the relations as well as the parts the entire time.

    A single grain has the global feature of zero relations. We are actively suppressing the sense of any connectedness to extremetise our mental state of representation.

    Two grains are related in linear fashion, three grains triangularly, four as a square. We could go five as a pentagon. But this simple conceptual geometry is already under strain, as the relations - in any idealised group - run from all members to all members.

    So all four are touching, and all five are touching. Eventually, and quite quickly, we cross over from the sense of looking at a scene dominated by parts to a scene that is a mess of relations. Relatedness becomes what we "see" if pushed to give a logically dichotomised reply. We see simply "many grains" with the sense of isolated parthood maximally de-emphasised.
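
    The arithmetic behind that crossover is just the count of pairwise relations in a group where every member touches every other member:

        \text{relations among } n \text{ items} = \binom{n}{2} = \frac{n(n-1)}{2}
        \quad\Rightarrow\quad 1,\ 3,\ 6,\ 10,\ 15 \ \text{ for } n = 2, 3, 4, 5, 6

    Parts grow linearly while relations grow quadratically, so by four or five grains the relations already outnumber the grains themselves - which is why the scene flips so quickly from being read as parts to being read as a mess of relations.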

    This thought experiment demonstrates why the paradox is not based on "vague predicates", but is based on how the brain visualizes images.Don Wade

    My argument is that the brain visualises by dichotomising scenes. And that in turn relies on hierarchically organised states of constraint. One metaphysical aspect of the scene has to be suppressed in favour of the other ... so as to leave the other as the one being emphasised.

    Your choice in a world that is only ever relatively lumped or split is to visualise its degrees of connectedness or division, its degrees of integration or differentiation. And once you get into counting games - mathematical-level semiosis - that runs into cultural misconceptions about the world actually being always definitely one thing or the other. Like either discrete, or continuous, as that is what the formal laws of thought appear to demand.

    But language-level semiosis is more relaxed - more tolerant of vagueness or ambiguity. And that better suits the real world of social actions. You don't have to force everything you believe or perceive into rigid or permanent categorisations. A heap is whatever suits a community of speakers in the pursuit of their social purpose. A pseudo-mathematical precision is making a fuss about differences that don't make a sufficient difference.

    Then actual brains are evolved to serve an even more relaxed level of judgement - neural semiosis. That is why an animal sees the world largely as one or many. This human level obsession with either linguistic or mathematico-logical clarity is baffling to them. It is not in fact wired into the brain as a habit of visualisation.

    So another problem with your analysis is mixing up levels of semiosis. At least three levels of "visualisation" are taking place. Each can try to sharpen the distinctions naturally offered by the one below. But the important question then becomes "for what purpose?"

    Brains just want to visualise in a general lumping and splitting fashion. They want to present a world divided into focus and fringe, grouped and scattered, individual and continuous, etc, etc.

    Language has its social purpose. Maths and logic move up to the Platonic abstraction of countable numbers - data, or information bits – that represent reality as if it were digitally discrete. Totally unambiguous.

    This is just a useful extremetisation of metaphysics. Treating reality as a material machine is the basis of our technological way of life.

    But for philosophy, this reduction to the countable, the digital, is a problem. It builds in a big mistake. And that is what a Peircean stepping back fixes. It makes it explicit that logical semiosis is a triadic sign relation. And vagueness is the underpinning to that.
  • Is vagueness a philosophy?
    We can only have one group of properties in our mind at any specific time. Such as: we can focus on the grain of sand, or the pile of sand - but not both. (That is, not at the same time.) This is similar to the (Rubin Vase) analogy. We will be aware of the other group - but the mind can't visualize both groups at the same time.Don Wade

    What you are talking about is taking two different attentional points of view.

    So in general, the brain is evolved to characterise scenes in terms of dichotomies that symmetry-break reality in its most informational or meaningful way. That is where you get metaphysically-broad operations like lumping vs splitting. The brain is set up – with left vs right asymmetry, for instance - to either group or individualise some clutter of visual elements. And by habit, we will learn to read the world in a way that is most informational in terms of our wants. We will automatically lump and split as appropriate.

    But both the parts and the wholes get represented because the brain is actually dichotomised in its wiring. And by attentional choice, you can switch between points of view, either focusing on the parts or the wholes.

    The contrast can be made stark by a choice of artificial stimuli like letter navons.

    [Image: hierarchical letter stimuli as used in the Navon task, with the global level defined as the large letter.]

    So in my view here, the connection between hierarchical levels and the sorites paradox is that the brain is evolved to apply a dichotomising logic on the world as that is the view which always must deliver the maximum salience or useful information. The brain is set up to say if it ain't lumped, it must be split. And vice versa.

    And dichotomies are symmetry-breakings taken to the limit - and hence result in the fundamental asymmetry represented by a local~global hierarchical division.

    Hierarchies are the (triadic) organisation that represent the final outcome of the (dyadic) act of dichotomisation or symmetry-breaking. And then that leaves vagueness as the (monadic) symmetry - that starting state of cognitive indeterminacy or unconcern - which completes the 1,2,3 that is the Peircean logical system.

    What I am trying to point out is that it is no surprise the brain is structured to process the world in this one particular way. It is the only logical way.

    Cognition always must start with generalised indeterminacy - anything could be the case. Then it must apply some filtering dichotomy - anything might be the case, but let's see how it might fit some formal opposition of "definitely more this than that". The clarification provided by being able to apply the law of the excluded middle.

    And eventually, as the brain evolves to process real scenes in the most efficient and informational ways possible, the asymmetry of a hierarchical organisation will emerge. The proper view of any scene will be divided along local vs global analytical lines.

    Lumped vs split, individual vs collective, salient vs peripheral, etc.

    So it is not that the mind can't entertain two opposing views at the same time - which makes it sound like some kind of processing shortcoming. Instead, the whole point is to be able to emphasise one view over the other potential view. It is the feature rather than the bug. We get to pick the version of reality that is most functional or informational in light of our perceptual goals.
  • Is vagueness a philosophy?
    ... they all operate perfectly well as alphabets (or conceptual schemes) of two characters (concepts) separated by a comfortable no-mans-land. The puzzle is how to look closely at that without it reverting (under however much cover of mystical pazazz) to a mere spectrum.bongo fury

    A spectrum suggests unbroken continuity. But the sorites paradox demands discrete acts of addition or subtraction. So we have the two poles of a metaphysical spectrum right there. The discrete~continuous. And the confusion arises in trying to satisfy these two formally antithetical constraints at the same time.

    The point on the spectrum marking the division between heap and non-heap shouldn't ever be just a single grain jump if it is to satisfy the metaphysics of continuity. But a multi-grain leap - say a jump from three to eight - is deemed a failure by a metaphysics of discreteness, as each individual addition or subtraction is taken as separately enumerable.

    So all you seem to be pointing out is this clash of metaphysics. The world must either be discrete or continuous at base. Pick your poison. The PNC and law of the excluded middle leave you no choice but to take a side in traditional logic.
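
    To make that clash concrete, here is a minimal Python sketch of my own (the threshold number is purely illustrative, not a claim about real heaps): a sharp cut-off makes heap-hood flip on a single grain, while the tolerance principle that "one grain never matters", applied inductively, slides every pile down to a single grain.

    ```python
    # Toy sketch of the sorites clash: a sharp threshold versus a tolerance principle.
    # THRESHOLD is an arbitrary illustrative number, not a claim about real heaps.

    THRESHOLD = 10_000

    def is_heap_sharp(grains: int) -> bool:
        """Discrete reading: heap-hood flips on a single grain."""
        return grains >= THRESHOLD

    def sorites_descent(grains: int) -> int:
        """Continuous reading: 'removing one grain never destroys a heap',
        applied over and over, slides all the way down to a single grain."""
        while grains > 1:
            grains -= 1  # each individual step is supposedly harmless
        return grains

    print(is_heap_sharp(10_000), is_heap_sharp(9_999))  # True False - the one-grain jump
    print(sorites_descent(10_000))                      # 1 - the inductive slide to absurdity
    ```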

    But that is where a logic of vagueness comes in. It can add a third metaphysical-strength ingredient to the story. It says that both poles of any such categorical dichotomy must arise - by reciprocal constraint - out of the common resource which is a vagueness.

    This is the developmental view, or symmetry-breaking view, of nature. Any sharp disjunctive distinction - such as the claim that your two "mutually exclusive and jointly exhaustive" options must be either discrete or continuous - must arise from Peircean Firstness, or a ground of simple unformed potential. A vagueness where the PNC has yet to apply as a dichotomising constraint.

    And as I say, the twist is to see ambiguity as a general resource rather than a fundamental problem for logic.

    The sorites paradox - in revealing a fundamental clash between the two notions of the continuous and the discrete - suggests something is broken with logic's three conventional laws. But the flip view is that it shows that language only needs to constrain interpretation to the degree it is contextually useful. We can live quite well with inherent ambiguity because metaphysical dichotomies speak of the opposing limits of being, not two actualised states of being. One can approach either limit of being as closely as one likes - by constraints that exclude the other pole of being - but one can't actually reach and exist at that limit.

    So neither perfect discreteness, nor perfect continuity, can exist alone. Each quality is always held to be relative to an act of limitation on the presence of its "other".

    That gets tricky with the sorites paradox as it asks you to mark a definite transition point on a spectrum. And intuitively - to honour the metaphysics being invoked - you don't want to give privilege either to an answer that is clearly discrete (one more grain does it), or to one that is clearly continuous (eventually and smoothly you have enough).

    Some kind of halfway house answer must be the case - one that speaks to the continuous as much as the discrete. And that is where the undecided, unshaped, potential of a vagueness comes in to rescue you - if you are willing to expand your logical system.
  • Is vagueness a philosophy?
    In the sorites-paradox example the group of sand-grains is at one level, and the sand-pile is at another level. We can have knowledge that both can exist at the same time but they exist, in the mind, only at different levels - hence the paradox. The concept of levels solves the paradox.Don Wade

    But is it too pernickety to insist that a single grain is absolutely and obviously not a heap? That's what I was trying to get at.

    So if push comes to shove, just specify the precise (possibly unitary) size of heap. Everything is on a spectrum.
    bongo fury

    So as Don says, cognition is a hierarchical modelling of the world. We are psychologically evolved to divide the world according to the contrasting extremes of what might be the case. Either we focus on the sand pile as a group of individuals or as an individual grouping. Either we are lumping or splitting. Either we are seeing signs of larger meaningful order or local random accident.

    But then in fact, this categorical division allows us to construct spectrums of possibility. We can see the range of different balances of lumped~split, grouped~scattered, general~individual that lie between the polar extremes.

    And likewise, given a spectrum defined by two complementary opposites, there must be the third thing of some exact borderline case - the balancing point where judgement could go either way. That is the point of a Gestalt bistable stimulus. It illustrates how we can be tipped back and forth where two opposite interpretations – grouped or scattered, cohesive or disorderly, lumped or split - are in some exact state of tension.

    There is a mid-point on the spectrum where one answer becomes as good as the other. There is a symmetry or inherent ambiguity - a logical vagueness.

    As Peirce defined it, vagueness is that to which the principle of non-contradiction fails to apply. You could say that the point at which a scatter becomes a heap, or a heap a scatter, is neither definitely the one nor the other. There is no fact of the matter. Or rather, the right predicate value is "vague".
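
    One rough way to picture that (a toy sketch in the spirit of Peirce's remark, not his actual formalism, and with made-up boundary numbers) is a predicate that returns a third value, "vague", across the borderline band where neither assertion nor denial gets any grip.

    ```python
    # Toy three-valued 'heap' predicate: the borderline band gets the value VAGUE.
    # SCATTER_MAX and HEAP_MIN are arbitrary illustrative bounds, not claims about real heaps.

    from enum import Enum

    class Verdict(Enum):
        NOT_HEAP = "not heap"
        VAGUE = "vague"      # neither assertion nor denial properly applies here
        HEAP = "heap"

    SCATTER_MAX = 4    # at or below this: clearly not a heap
    HEAP_MIN = 100     # at or above this: clearly a heap

    def heap_verdict(grains: int) -> Verdict:
        if grains <= SCATTER_MAX:
            return Verdict.NOT_HEAP
        if grains >= HEAP_MIN:
            return Verdict.HEAP
        return Verdict.VAGUE   # the borderline band: no fact of the matter

    print([heap_verdict(n).value for n in (1, 4, 50, 100, 500)])
    # ['not heap', 'not heap', 'vague', 'heap', 'heap']
    ```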

    So with the Sorites paradox, the ambiguity of the transition from (purposeful and collective) heap to (random and individualistic) scatter should be what is expected, not bemoaned.

    It is not helped that the set-up of the paradox contains many confusions. Is a stack a heap?

    An ordinary language definition of "heap" suggests that a pile is being created in one place in a constrained fashion. But the pile is meant to arrive at its heaped arrangement - that is, grains piled on each other - in random fashion.

    So a scatter of grains lacks any grains piled on top of each other, as well as any clumped grouping. Every grain qualifies as a solitary individual by usual standards. (As long as they don't also lie on a hot surface that is melting them into a collective puddle of glass.)

    But if we were to pick out the Platonically minimal geometric arrangement of a trihedral stack - one grain balanced on top of three like cannonballs – would this be a heap? Could such a clear lack of random organisation logically meet the definition of a heap?

    So in a world of pure Platonic order, there is a smallest heap - a minimal pile of regular spheres. But our ordinary language definition of a heap is based on some key supplementary notions about nature. We see the Humean causes of a heap as a combination of order and chaos. And that introduces plenty of ambiguity.

    A stack of cannonballs permits neat and direct counting of the parts. And we get a simple answer because of the extra constraint of being able to order the whole situation. There is only ever the one answer to what is the fewest number of perfect spheres that can form some stack of round objects with more than a single layer.

    Well, one cannonball could be perfectly balanced on another. However that reveals another ordinary language constraint. A stack should be stable. And that normally means wider at the base. And actually held together by friction, so the spheres can't be too smooth, or on too smooth a surface either.

    You get the idea. Everywhere you turn, you start to encounter the ambiguous or vague elements in your little logical fables about reality.

    But anyway, a stack of four sand grains seems too small to be a randomly accumulating heap. Fewer than four is always going to be a layer at best, a scatter more likely. Yet how many more than four is evidence for a properly random pile? Doesn't this ontological demand for randomness make that answer itself statistically variable? Isn't that perhaps a key, and indeed logically valid, reason why folk don't want to commit to some hard number of sand grains? Intuitively, it would be improper to be able to mark some definite point where the heap is defined by some Platonically fixed number.

    I could go on. Science and maths can keep refining our concepts of the world, and hence our capacity to be more pernickety.

    One could appeal to sphere packing theory as that indeed gives a narrow answer. Orderly stacking of cannonballs can achieve a volume-filling density of 74% while a random packing - if you could only shake them about inside a crate - arrives at a 64% density. Or at least that is the statistical average enough shaking would converge on after a reasonable time.
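
    For what it's worth, the 74% figure for orderly stacking falls straight out of the geometry of face-centred cubic (cannonball) packing - a standard textbook calculation, sketched below - while the ~64% figure for random close packing is an empirical, statistical result rather than a derived one.

    ```latex
    % \phi here is just the packing fraction (fraction of volume filled by spheres).
    % FCC packing: 4 spheres of radius r per cubic cell of edge a, with spheres
    % touching along the face diagonal, so a = 2\sqrt{2}\,r.
    \[
    \phi_{\mathrm{fcc}}
      = \frac{4 \cdot \tfrac{4}{3}\pi r^{3}}{a^{3}}
      = \frac{\tfrac{16}{3}\pi r^{3}}{\bigl(2\sqrt{2}\,r\bigr)^{3}}
      = \frac{\pi}{3\sqrt{2}}
      \approx 0.74,
    \qquad
    \phi_{\mathrm{rcp}} \approx 0.64 \ \text{(empirical)}.
    \]
    ```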

    Maybe - psychologically informed by this new information - we might see why a heap of, say, five or six grains might be enough to qualify both as a pile dropped in the one place and as one with an irregular enough structure to count as an untidy heap rather than an orderly stack.

    We can eliminate vagueness in our concepts of nature by adding such constraints to our definition. We can increase our pernicketiness ad infinitum.

    But that in turn presumes nature to be counterfactually definite all the way down to its atomistic foundations, not vague, indeterministic, stochastic or random in any meaningful way.

    And we know from quantum theory, spontaneous symmetry breaking, and other modern physics that that ain't a true fact any more.

    So a logic of vagueness is needed just for epistemology - our conceptual reasoning about the world. And it is needed also for ontology, as ambiguity in the guise of symmetry, tipping points, emergent dynamics, quantum indeterminacy, etc, is now an accepted aspect of reality.
  • Is vagueness a philosophy?
    Don't you think it would seize up for the opposite reason, too?bongo fury

    I thought I was clear that fruitful oppositions are what it is always about. So you can be too vague, and also too pernickety, in your language. As any good artist knows, what you leave out matters as much as what you put down.

    The interesting (and paradoxical) thing is that the clarity is so easily achieved.bongo fury

    Or that it is always relative ... to some larger purpose.

    The sorites paradox is a sharp example that should bring attention to the essential ambiguity of language. The "paradox" is to read this as evidence that language fails in its logicist ambitions - that you can produce falsehoods from apparently impeccable reasoning.

    But it is logic that builds in its own problems by traditionally seeking to exclude context from judgement. It fails - if you follow it strictly - to take advantage of the fundamental resource that is vagueness.

    [1] Tell me, do you think that a single grain is a heap?
    [2] No of course not, and I know I'm a long way from the smallest number of grains that could possibly be the smallest heap! Far enough that a single grain is an obvious case of a non-heap!

    Of course, later on, the same player may feel differently...

    [1] Tell me, do you think that a single grain of wheat is a heap?
    [2] Well, certainly, it's the very smallest size of heap.

    Game over. People often finish up claiming 2 had been their position all along. Perhaps it should have been, and the puzzle is a fraud.
    bongo fury

    How is that my position?

    My own response would be to question your claims of being certain that a single grain is a single grain. One can't exclude uncertainty on that score either. Maybe another grain is hidden behind it, or it is in fact a swarm of grain-lets, or a hologram, etc, etc.

    To conclude that anything is not a heap is a matter of deliberative judgement as much as deciding a heap exists. You can call a single grain of wheat a non-heap "for all practical purposes", yet there is still residual vagueness or doubt in such a claim as there always must be.

    You can't take one side of the paradox for granted as "a fact" and the other as always "a judgement". That would make life far too easy.
  • Is vagueness a philosophy?
    Would your larger interest example be similar to a reference to an analogy of a "grain of sand or a pile of sand"? If so, I would like to offer a solution of how to view the larger picture without reference to vagueness.Don Wade

    Not sure what you mean. But what I wanted to emphasise was how a developmental view of logic leads to Peirce's pragmatism and even Aristotelean finality.

    The usual view of logic is Platonic. Truth just obtains. Facts are just facts. Vagueness is just a variety of epistemic ignorance or confusion. Etc.

    But admitting vagueness to the fold says truth is contextual. It serves some larger purpose that happens to be operating. The purposes of some inquiry must be taken into account.

    So if you are talking about Sorites Paradoxes, it does become an active matter of "who cares?". A heap or the characteristic of baldness are higher level constraints that can be imposed on acts of measurement. And the "paradox" is that in ordinary conversation, it is fine that the numerical precision is rough. We can recognise a group or a bunch or a collection at a glance. For our pragmatic purposes, there is a heap or a bald person. And we can be looser or more precise about the matter to the degree we might agree that a less vague, or even more vague, definition is useful.

    And this would be a positive feature. Language would seize up if it had to be exact beyond the point that exactitude is useful. In semiotics, meaningfulness is measured as the differences that make a difference. So the more differences that we can definitely ignore - treat as the meaningless and vague backdrop - the more meaningful is whatever it is that we choose to note.

    So everything depends on these kinds of reciprocal relations. More of one means less of the other. And "more or less" is then the Goldilocks balance point where you have struck some kind of useful and stable equilibrium in terms of knowledge, truth, whatever.

    "Is that man bald?" "Is that a heap of wheat?" Given a logic of vagueness, more or less becomes the best possible answer. It's precision is contexually-based as it relies on the larger circumstance of "who needs to know".

    Truth is no longer Platonic but dependent on some collective and purpose-imbued point of view. And this larger view can change its mind. It can insist on a sharper dividing line as to a definition of baldness, or relax it as well. A community of inquirers will settle on the habit of thought that provides the most meaningful-possible boundary line.

    But what kind of larger interest are you thinking about that does not rely on the vagueness of a "for all practical purposes" more or less answer?
  • Is vagueness a philosophy?
    For the last 2,500 years man has pretty much accepted the findings of the early philosophers (especially Aristotle). We looked at objects as being defined as having boundaries (whole objects).Don Wade

    CS Peirce made a big effort to bring vagueness into logic. And ironically, in my view, this demands being quite precise about a definition of the ultimately indefinite. The vague is the “other” of the crisp or bounded.

    The ontological consequence of this is that nothing real can be either completely bounded or completely indeterminate, as both logical categories would be defined in dialectical relation to each other. So the most certain thing has some residue of vagueness, and vice versa.

    For Peirce, it was also the cornerstone of a developmental approach to either epistemology or ontology. A process philosophy.

    The ordinary view - as taken by Bertrand Russell - is that the world is always definite. Even a smudged photo of your mother is still exactly whatever it is as an image on close inspection. But from a process point of view, this would be the fallacy of misplaced concreteness.

    So Peirce equated vagueness with his firstness, or the fundamental spontaneity of possibility. At base, uncertainty exists as that which can then be shaped into counterfactual definiteness. Certainty then becomes the other limiting pole of this process of development. It is the world becoming as concretely what it is as much as is possible, or as much as it matters.

    This developmental view is thus semiotic, or brings the further question of meaning and purpose into play within logic or ontology. Vagueness becomes negated to the degree there is some larger interest in play.

    Is that a smudged photo of your mum? If it matters, more work can be done to sharpen the image somehow. A statistical view could be taken that assigns a probability.

    So yes. Conventional logical thought hates the very notion of vagueness. It is set up to exclude it. That is what Aristotle’s three laws of thought are about.

    But then, there has to be that vagueness to exclude. And you can move things up a meta level in logic by incorporating vagueness as something definite within your general epistemic system.