Comments

  • Thoughts on Epistemology
    Actually, I thought the thread was going quite well.Banno

    So when "you" believe something, is it true in some self-transcending sense, or just true for you?

    Answers on a postcard as usual.
  • Do numbers exist?
    but I will just point out that by 'in-finite' I don't mean to refer to "an infinite amount"; all amounts are finite.Janus

    Yes. You sense the difficulty and try to avoid it.

    I seek to make the difficulty plain and so force a definite choice.

    If it "contains the potential" for intelligence and intelligibility, then it would seem to make more sense to think of it as in-finitely and eternally intelligent, than to think of it as brutely blind.Janus

    But here you have to offer the dichotomy on which your notion of intelligence, or intelligibility, or creativity, etc, is based.

    You have to show that you are un-breaking a breaking rather than extrapolating a quantity to arrive at an "unbounded" quality.

My view of the development of human intelligence and creativity is scientific. It has that tested evidential support. And so I don't think of these named qualities as being in any way physically fundamental or general. There is nothing more particular and emergent in the known Universe than the complexity of a living human nervous system.

    So the dichotomy, the formal contrast, is between complexity and simplicity, between negentropy and entropy, between organismic level self-interest and purpose and physical level disinterest and blind tendency.

    Peirce then connects the simple and the complex in psychological or phenomenological language. Intelligence (or any evolutionary/adaptive process) needs to combine selection pressure and spontaneously arising variety. Hence we arrive at the story where firstness equates to absolute blind spontaneity or tychism, and thirdness equates to absolute firm habit, the continuity of evolved constraints or synechism.

    So now nature can be "intelligent" in that it has this evolutionary logic, this intelligible structure. A fruitful marriage of chance and necessity, freedom and constraint.

    Human-style intelligence and self-centred purposefulness falls out of the picture. It is a general possibility taken to a particular extremity. To talk of a still "higher" creating intelligence has to be a continuation of that particularisation. A super-mind would have to inhabit a super-body as well.

    Now we can imagine such a next step. The idea of artificial intelligence and the Singularity is one such extrapolation. Humanity could get downloaded to a technology that spreads itself across inter-galactic space.

    A nice conceit. But it does correctly extrapolate whatever it is that we could mean by human intelligence and creativity as qualities that might increase in general quantity. If life and mind is negentropy that depends on accelerating the Universe's entropification, then spreading ourselves across the Universe to tap its physical resources at every possible location is what natural philosophy would predict.

    But anyway, this is the issue. You have to choose whether you are magnifying a quality or dissolving a quality. And how can you head back to the origins of a quality by simply increasing the amount of it?

    You claim you are not increasing the amount in trying to generalise the quality. But really, you are. You are imagining a little bit of local stuff spreading to take over everything. And that is simply shifting reality in the direction of one pole of some dichotomy. You are arguing that eventually - go far enough - and you lose sight of the other pole.

    So take embodied human intelligence and creativity. You want to lose the necessity of the body and imagine the mind spread generally.

    The Peircean view is pragmatic - mind arises as a way to regulate material physics, accelerate entropic flows. Mind makes no sense, it can't exist, unless it has that physical context.

    So you are imagining a nonsense - a mind without that "other" which is the source, the cause, of its being.

    I can see how tempting this move is. We are so used to thinking in terms of dualism. It is simply believed that mind and world are already separate, so both are free to grow in-finitely in their own realms.

    But that is a dualistic metaphysics. And it doesn't in the end work. We know that. Hence even theists do try to find a more organic or immanently self-organising story occasionally. And Peirce spells out the logic of that.
  • Thoughts on Epistemology
    Either language absolutely captures the truth of the world, or the truth of the world absolutely escapes capture by language.

    Or in fact neither, but - pragmatically - somewhere in between. ;)

These threads never get anywhere because they leave out the further issue of optimisation. If it is any kind of "self" that is being defined in contrast to "the world", this very separation itself has to develop and be reinforced by the "truth telling".

    So there is a further optimality constraint on the whole business. Truth has to be effective at driving a wedge between self and the world, between the phenomenal and the noumenal. This is the epistemic cut argument. You don't want the truth dissolving the very division on which a self~world relationship is formed.

    That is why semioticians stress the fact that minds are focused on understanding the world in terms of signs. The fact that we don't have transcendent access to the thing-in-itself is the feature, not the bug, of truth-telling. It wouldn't work if we didn't see the world through an utterly self-interested lens - as it is "seeing" in this fashion that does give rise to "the self". There would be no witness to "the truth" if truth-telling did not play the part of creating this witness.

    So - because these threads always get stuck on idealism vs realism, the absoluteness of solipsistic isolation vs direct access - they don't really get into the meat of the issue.

    Pragmatism picks up the story where it is accepted that truth-telling is a practice with a purpose. And then we can start to appreciate how a somewhat counter-intuitive optimisation principle must apply.

    To form "a self" requires not directly "knowing the world".

If the body just responded directly to the "sensory facts" of the world, that would be useless. Light, soundwaves, physical knocks and scrapes would just register as energetic deformations. Some kind of heating or damage.

    The nervous system exists to transcribe physics into information. The energy that composes the world becomes an interpreted set of symbols - an umwelt. As much as possible, the reality is made something simple and imagined. Organised in terms of the image of "a self" in its "world".

    So pragmatism is better epistemology as it is a theory that accounts for observers along with the observables. They are two sides of the one (semiotic) coin.

    If your epistemology fails to speak about what constitutes the observer, and just argues about what is observable, then of course it won't get anywhere. Frustrated with itself, it can only wind up in the angry silence of quietism. One liners that say nothing in their ambiguity.
  • Do numbers exist?
    So, my question is, just as with finite temporal being we extrapolate to in-finite, eternal being; then why not from finite, temporal creativity to in-finite, eternal creativity, from finite, temporal intelligence to in-finite eternal intelligence, from finite, temporal order to in-finite, eternal order, and so on?Janus

    The extrapolation has to reverse a dichotomistic separation. It has to unbreak a symmetry breaking to recover the original symmetry. So there is a particular logical model to be followed.

So for instance, if existence depends on the actualised contrast between flux and stasis, or chance and necessity, or discrete and continuous, or matter and form, etc, etc, then that definitely present distinction is what has to be folded back into itself as we wind back the clock to any vague and undivided initial conditions.

    It is the unity of opposites argument. Dialectics. And I don't see that you are posing your own account this way.

The infinite is opposed to the infinitesimal. They represent the limits of the dichotomy between the ultimately continuous and the ultimately discrete. So symmetric initial conditions would fold this distinction back into itself. It would be a state that is neither infinite nor infinitesimal. Neither continuous nor discrete.

That of course sounds rather mystical. But it actually maps pretty well to the Planck scale, which was the start of the Big Bang. The geometric extent of spacetime was infinitesimal - as little as it could possibly be. While the energetic content of spacetime was infinite - as hot and dense as it could be.

    So there is a logical formula to follow here.

    You, on the other hand, are imagining a linear extrapolation. You start with some limited amount of something and multiply it until it grows to be unbounded. Time, creativity, intelligence, order and being are all finite and definite properties, so why can't they be - individually - infinite?

    So nothing is being folded back into itself to heal a symmetry-breaking. There is no dissolving of the crisply divided to arrive back at a shared primal origin. The metaphysical operation you have in mind is instead turning a limited substance into an unbounded substance.

    Instead of dissolving hylomorphic being by folding form and matter back into themselves via a loss of all distinctions, as in the notion of an Apeiron which is just pure fluctuation, you are accepting the substantial state and extending it without limit. It loses its located particularity by being rendered absolutely general rather than by being dissolved back to a vagueness.

    The Peircean model is firstness => secondness => thirdness. Or vagueness => particularity => generality.

    So sure, you can make absolute generality your initial conditions rather than your final outcome. But that then rules out a developmental logic.

    You can see this tension playing out in theistic attempts to imagine divine immanence. Is God there at the beginning or realised at the end? Is God the creating intelligence who decided to construct a Cosmos for some reason, or is the Cosmos, through its evolution, the eventual realisation of Godhood?

Peirce's metaphysics argues that the development of the Cosmos represents the universal growth of reasonableness. The beginning was an unintelligible chaos - meaningless tychism. But that couldn't help but develop patterns and order. Habits emerged. The Cosmos started to self-organise and become intelligible. So the Cosmos is on a journey towards maximal "reasonableness". To the degree you want to read divinity into the story, the "designing intelligence" is simply the semiotic machinery - the fact of habit-taking - by which chaos can become completely ordered in a general or global fashion.

    So this would be divine creation or divine intelligence of the most limited kind. Especially right back at the beginning. And even at the end, it only manifests as some general state of order. It is not intelligence as we mean it - a mind cracking problems for self-interested reasons. It is simply a mechanism - semiosis-driven self-organisation - extrapolated to the most global possible scale of being.

Again, modern science confirms this particular metaphysics. The Big Bang is self-organising its way to its Heat Death. The Planck scale symmetry breaking will eventually become as broken apart as it can physically be. It will arrive at the stasis of a de Sitter void, a fully thermalised and unchanging dead universe.

    Creativity and intelligence and mindfulness as we mean it are just passing negentropic eddies in this general flow towards a maximum entropy condition. We are not what creation is about. Even if we exist only by contributing to that entropification project.

    So to extrapolate from us is metaphysically unjustified. At least if we are following a metaphysics that is based on the logic of dialectics.

    And given that metaphysical dialectics proves itself to work, we should be brave enough to follow it all the way to talk about the beginning and end of substantial existence itself.

    It is fine that you make your extrapolation argument that starts with the particular and abstracts to the general. But an unbounded amount of some stuff - like time, intelligence, creativity, whatever - is not actually a shedding or dissolving of boundaries. It is only a generalisation that leaves you with an unlimited quantity of that very stuff. The stuff is still bounded, still substantial and definite, even if you are imagining it to be actualised in some infinite quantity.

    That's the problem. Stuff is hylomorphic. Definiteness is defined by the existence of a dichotomy - the unity of a complementary pair of bounds or limits. Your extrapolation only multiplies the quantities. It cannot dissolve the actual qualities in question. And so talking about an infinite amount of something fundamental solves nothing, just multiplies your causal difficulties.

    A finite amount of intelligence or creativity is easier to explain than an infinite amount. And at least a finite amount, if multiplied enough, could become an infinite amount - using our metaphysical maths.

    But to solve the problem of creation, the question of how existence could bootstrap into being, you need to be able to dissolve substantiality itself. You must undo the very notion of a quality.

And dichotomies - in defining a reciprocal or inverse relation - can do that. If you multiply x/1 by 1/x, you get 1. You can unmake your perfect asymmetry and recover a perfect symmetry. So now the multiplication goes in the right direction. The particular becomes not the general but the vague. You can no longer tell one pole of being from the other. Their particular qualities have been merged - mutually annihilated - to become again a featureless oneness, a unity of opposites.
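The reciprocal relation can be restated formally. Nothing here goes beyond the x/1 and 1/x already given; mapping the two limits onto the infinite and infinitesimal poles discussed earlier is my gloss:

```latex
\frac{x}{1}\cdot\frac{1}{x} = 1,
\qquad
\lim_{x \to \infty} \frac{x}{1} = \infty \ \text{(the infinite pole)},
\qquad
\lim_{x \to \infty} \frac{1}{x} = 0 \ \text{(the infinitesimal pole)}
```

However large x grows, the product of the two reciprocal poles stays at the symmetric unity - which is the "multiplication in the right direction" being claimed.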

    So we have here two views that can be defined as mathematical operations. The metaphysical claims can be made highly precise.

    The question then is which one is actually doing the trick? And which metaphysicians have talked about the opposed alternatives the most clearly?
  • Do numbers exist?
    The way this started IIRC is that you accused me of being a dualist and have then proceeded to make a dualist argument for the past several posts.fishfry

    There's a difference between substance dualism and my dialectical or semiotic approach.

    Well, information processing is a TM. That's the technical definition.fishfry

    If that were true, computation becomes a physical impossibility. The technical definition requires the physical manipulation of an infinite tape. Those are quite hard to come by in the real world.

    So yes, we can pretend. We can build pseudo-TMs that paper over this embarrassing fact with virtual machine architectures. All you have to worry about now is what you do when you exhaust 64-bit memory addressing, and then 128-bit, etc.

Again, you speak so confidently about neuroscience and computer science. Yet you seem to confuse theoretical constructs with real-world practicalities. As was my point, TMs define an ideal limit. And thus a literal TM is also physically unrealisable.
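The gap between the ideal definition and any physical machine can be sketched with a toy simulator. This is purely illustrative - the rule and the tape bound are invented for the example - but it shows the one thing an ideal TM never has to face: running off the end of a finite tape.

```python
# Toy Turing-machine-style simulator on a FINITE tape. An ideal TM assumes
# an unbounded tape; any physical realisation must pick a bound, after
# which the machine simply fails. Illustrative sketch only.

def run_tm(tape_size, steps):
    """Write 1s left to right; halt with an error if we run off the tape."""
    tape = ["0"] * tape_size
    head = 0
    for _ in range(steps):
        if head >= tape_size:
            # The physical limit the idealised definition never encounters.
            return tape, "out of tape"
        tape[head] = "1"
        head += 1
    return tape, "halted"

tape, status = run_tm(tape_size=8, steps=5)
print("".join(tape), status)   # 11111000 halted
tape, status = run_tm(tape_size=8, steps=20)
print("".join(tape), status)   # 11111111 out of tape
```

Real machines just push the bound further out - 64-bit addressing, then 128-bit - without ever removing it, which is the point being made above.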

    But that doesn't prove that minds work that way. Only that NNs have been doing some amazing things.fishfry

    I can't take you seriously when you make such weak arguments. Of course it is evidence that we are getting at something central to the functional design of the brain. Just as being able to mechanise bird flight would have been evidence we were capturing the essence of the way birds fly.

    And just as we instead built fixed-wing planes as the unsubtle brute force alternative, so computers are the familiar clunky von Neumann architectures they have been since computing got properly started. There is nothing biologically-inspired about the design. Yet they do the job - given our limited practical purposes of just getting around or automating various tasks.

    In the beginning, we tried to program chess algorithms with expert knowledge. (You remember the expert systems movement I'm sure). That got the algorithms to a certain level. But to achieve mastery of the game, the designers gave up trying to teach the machine strategy. They just turned the NN loose and let it train itself.fishfry

    Yeah. I do remember. And NNs were pushed before that. In the beginning, more naturalistic architectures were being suggested. Check out cybernetics. But then symbolic processing became the 1980s fad. Lisp machines and all that. I happened to edit a computer journal at the time expert systems were getting hot and hyped. The history of all this is familiar.

    I objected to your claiming that neuroscientists think the mind is a computer program. Which is the same exact thing as an "informational process" even though you keep claiming it isn't.fishfry

Stop misrepresenting me. I didn't say neuroscientists think the mind is a programme. And information processing is more broadly defined than by universal Turing computation. Before digital there was analog computation, for a start. Learn your history and stop making a fool of yourself.

    And by admitting that when it comes to brains, NN's are at best an analogy, you are conceding my point. Brains aren't NNs. You just agreed that they're only analogies to NNs.fishfry

    It's not an admission. It's my point. NNs are successful models of the brain's essential functional architecture.

    You talked vaguely of "biochemical processes". Well science prefers to talk precisely. And it seeks to understand the basic trick of the brain in terms of some replicable informational architecture.

    But hey, I've lost interest. If all there is to do here is to keep correcting your misrepresentation of my arguments, that is really a waste of time.
  • Why does evolution allow a trait which feels that we have free will?
    Hmm. Are you saying that Kant's categorical imperative IS the thermodynamic imperative? Try reading your own source perhaps?
  • Why does evolution allow a trait which feels that we have free will?
    What's a desciple?

And why would you imply that Kant might have to be either accepted or rejected in his entirety? Wouldn't that be a rather religious approach on your part?
  • Why does evolution allow a trait which feels that we have free will?
    It is just your biased view.Rich

    Full credit should be given to Kant for putting the story in motion.Rich

    So it is either just me or just Kant now?

    Let me know when you decide who to blame. >:O
  • Thoughts on Epistemology
    But if there can be no coherent skepticism about our hands existence, then to say that we know that they exist is incoherent as well. If Moore gives perceptual evidence for the existence of hands, then he accepts skepticism as coherent.Πετροκότσυφας

Well said. Belief can't be belief without the possibility of doubt. Why would he treat waving his hands about as a demonstration of anything unless it was not just a factual assertion but a counterfactual one?

    Belief and doubt go together. We can't talk about the presence of the one in the absence of the other.

    So to say a cat believes X is to say a cat could doubt X at the same intellectual level. Pre-linguistically, there is no problem with that kind of belief matched with that kind of doubt. Cats can learn to be sceptical of their owner's actions just as much as trust them.

    But then language is another level of belief~doubt semiosis. And formal logic yet another.

    The real complaint boils down to a crossing of levels. At a pre-linguistic or biological level, we don't doubt those are our hands that wave about in front of our eyes exactly as we will and expect. There just isn't a chance of counterfactuality on that score - unless Moore was surprised to discover he was waving flippers or blocks of cheese.

    So to claim intellectual doubt about the existence of your hands is to claim a higher-order counterfactuality about something which at its "proper level" just isn't lending itself to such counterfactuality.

    I just said Moore's hands might be flippers or blocks of cheese. He might be dreaming or hallucinating. So linguistically, counterfactuals come thick and fast.

    But pragmatically, we can recognise a basic illegitimacy of this kind of semiotic level crossing. We are importing the counterfactuality of a higher order where the counterfactuality is just not there at the level being thus challenged.

    So the Wittgenstein-flavoured pragmatism is right for the wrong reasons. Or reasons that are poorly articulated.

    The "theory of truth" issue is that all belief is secured against its own counterfactuality - but properly speaking, by counterfactuality of the appropriate order or semiotic level.
  • Why does evolution allow a trait which feels that we have free will?
    ...your small contribution was the magical Thermodynamic Imperative.Rich

    Thanks for the credit. But that's just mainstream science really. The stuff you "grew out of" once you took up astral transportation and whatnot.

    Out of curiosity, why aren't you assailing folk with your quantum holographic mind projection theories so much these days? Too "sciency"?
  • Do numbers exist?
    I don't believe that intelligibility can extend to fundamentality. So, whatever names we use to denote it: substance, God, the Real, Firstness, the noumenal, the Will, the Apeiron, Buddha Nature and so on, will, with all their associations and connotations, be tools to relate them to our various systematic understandings in the intelligible world, the 'World as Idea' as Schopenhauer calls it.Janus

    My argument is that intelligibility can approach it in the limit - as its own "other". So intelligibility can define the unintelligible as that which it is ultimately not.

    And because we know intelligibility to exist, then we know that - whatever else - its unintelligible ground had to contain intelligibility as its potential. So we can actually know something usefully definite about fundamental unintelligibility.

    This is apophatic reasoning. But hey, in metaphysics that is unreasonably effective. ;)

    So, the idea of tychism is really just a dialectical negation of the idea of regularity, stability, concreteness; in short of 'being something'.Janus

    Exactly. We recover the pre-dialectical through dialectics itself.

    Your talk about the fundamental is just dialectics. If we and the Cosmos are an effect, therefore there was a cause. If we and the Cosmos are emergent, then something was the more fundamental.

So the question of creation and being is always going to be dialectical and apophatic. You then need to scout around the history of metaphysics and see who does the job the most rigorously in this regard.

Spinoza's substance was not thought by him to be "anything", but more like being everything and nothing, inasmuch as to be anything is to be a mode of substance. Hegel similarly said that pure being is close to being pure nothingness. We find apophatic notions of God or Buddha Nature that can be traced back thousands of years. So we can say of Tychism, as Hegel says in another context, that it is the "same old stew reheated".Janus

    I've always said this same old stew has been on the back-burner since the dawn of metaphysical thought. I give full credit to Anaximander with his system of apeiron and apokrisis.

    And having checked out many thinkers, Peirce just keeps surprising me with the completeness of his approach. He sorted it out at a fundamental logical level with his triadic model of development. He put the intelligible into the intelligibility.

    If you can point out a defect in his analysis, have at it. But telling me others said similar things is not a criticism, is it? My claim is he said it best.

    or, on the other hand, that it is the emanation of an unfathomable, infinite intelligenceJanus

    So the great unintelligible intelligibility that blindly chose? Does posing an actual contradiction as the origin of being help your case?

    Is talk of "emanation" not just hand-waving dressed up in a fancy word?

The point of this is that the emergence of concrete somethingness as a process cannot be intelligibly traced back into firstness, because that is where intelligibility ends. We cannot say what is the symmetry of firstness that is broken to produce secondness, unless we impute an intelligence (albeit of an unfathomable order) to firstness, an intelligence of which our intelligence is a temporal reflection.Πετροκότσυφας

    That just isn't a logical argument.

    If firstness is where intelligibility ends, then intelligibility is (apophatically) defining it. And for me - given that my worldview is based on the emergence of constraints - apophatic is good. It is fundamental itself.

    But you are having to resort to paradox and self-contradiction. You have to talk about intelligences that are unfathomable. You are having to talk about complexity - rational structure - being actually present when there is meant to be only a state of fundamental simplicity.

    I just don't get how you can prefer blatantly self-contradicting positions when the alternative is so logical and elegant.

    That is the limitation of Schopenhauer's system; it is inexplicable that an ordered Cosmos can be the expression of a blind will. The same goes for any system that thinks firstness as a blind chaos.Janus

    Well yes. A blind intelligence is a nonsense. But a blind chaos isn't. It would seem definitional of chaos that it lacks rational structure. So again, I'm just not understanding how you can really believe your own line of argument here.

    I get that you are psychologically committed to some notion of a creating God. But so far you are not revealing any hole in a Peircean process philosophy perspective.

    Creating gods seem necessary to a certain brand of logic - the one that believes in mechanical or concrete chains of cause and effect. But that is the very logic that leaves out formal and final cause so as to describe the world solely in terms of material and efficient cause. The logic leaves the blank - the absence of formal and final cause - that a creating intelligence then "naturally" has to fill.

    So really, in my view, you are just responding to an obvious hole in reductionist cause and effect thinking. It leaves out formal and final cause right from the beginning. So formal and final cause is what you know must be jammed right back in that blank slot.

    But Peirce - and all the other systems thinkers and natural philosophers since Anaximander - have a larger dialectic understanding of logic. Formal and final cause have their proper place in the metaphysical system. The blank space left by reductionism is filled by the logic of holism.

And now you don't need some purposeful and transcendent creator. The Cosmos can spontaneously self-organise out of pure possibility. With Firstness or Apeiron, there is just nothing to prevent that happening, and so it does.

    And retrospectively, the outcome will be judged optimal. The Cosmos might start out trying to express every concrete option, but then with all the options in self-competition, the variety of ways of being will be reduced to the outcome that proves the most effective (at being enduring and continuous - or synechic as opposed to tychic in Peirce's jargon).

    I don't see how you can deny the simple logic of this application of natural selection to cosmic evolution. Scientific cosmology is now based on this very metaphysics - the Big Bang as a collapse of the universal wavefunction. So it is not as if we lack physical evidence for it. Quantum mechanics tells us classical reality is emergent from a "sum over histories" or path integral. Even a particle gets from A to B by "taking every possible route", and then the actual route is whatever turns out to be the "least action" or energy-optimising path.
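The "least action" selection can be parodied as a toy computation - a minimal sketch in which path length stands in for the action, and a crowd of random candidate routes stands in for the "every possible route". All numbers and the cost function are invented for illustration:

```python
# Toy "sum over histories": of many candidate paths from A to B, the one
# that survives as "the route" is the one minimising a cost - here plain
# path length standing in for the physical action. Illustrative only.
import random

A, B = (0.0, 0.0), (1.0, 0.0)

def path_length(waypoint):
    """Length of the two-segment path A -> waypoint -> B."""
    ax, ay = A
    bx, by = B
    wx, wy = waypoint
    return ((wx - ax) ** 2 + (wy - ay) ** 2) ** 0.5 \
         + ((bx - wx) ** 2 + (by - wy) ** 2) ** 0.5

random.seed(0)
# A thousand wandering detours through a midpoint, plus the straight route.
candidates = [(0.5, random.uniform(-1, 1)) for _ in range(1000)]
candidates.append((0.5, 0.0))

best = min(candidates, key=path_length)
print(best, path_length(best))   # (0.5, 0.0) 1.0 - the straight path wins
```

The competition among all the detours picks out the straight line of length 1.0, which is the crude analogue of the variety of ways of being reducing to the optimal outcome.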

    So logic tells us the right kind of metaphysical answer. Philosophy of science tells us why reductionist science left the feeling of there being a blank so far as formal and final cause are concerned. And now modern science itself has filled in that blank (or is trying to) by a holistic model in which classical reality emerges from naked potentiality coupled to natural selection.

    In the face of all this, you still prefer paradoxical stories about unfathomable intelligences, blind choosing, and "emanations"? Does that really sound like strong metaphysics?
  • Why does evolution allow a trait which feels that we have free will?
    I'm here to discuss philosophy, and you apparently have no interest in that.JustSomeGuy

    Rich is here to represent the new age loopies - part of the site's diversity initiative. Just ask him about holographic quantum mind projection and see what he actually endorses. :)
  • Why does evolution allow a trait which feels that we have free will?
    Illusion of free will is not like a meme. You experience it.bahman

    Get back to basics. The sense of self is a perceptual contrast the brain has to construct so as to be able to perceive ... "the world". Even our immune and digestive systems have to encode some sense of what is self so as to know what is "other" - either other organisms that shouldn't be there, or the food the gut wants to break down. And so too, the brain has to form a sense of what is self to know that the world is other.

A second basic of the evolved brain is that it needs to rely on forward modelling the world. You probably think the brain is some kind of computer, taking in sensory data, doing some processing, then throwing up a conscious display. Awareness is an output. But brains are slow devices. It takes a fifth of a second to emit a well learnt habitual response to the world, and half a second to reach an attentional level of understanding and decision making. We couldn't even safely climb the stairs if we had to wait that long to process the state of the world.

    So instead, the brain relies on anticipation or prediction. It imagines how the world is likely to be in the next moment or so. So it is "conscious" of the world ahead of time. It has an "illusion" of the next split second just about to happen. That creates a feeling of zero lag - to the degree the predictions turn out right.

    And this forward modelling is necessary just to allow for a continual perceptual construction of our "self". We have to be able to tell that it is our turning head that causes the world to spin, and not the other way round. So when we are just about to shift our eyes or move our hand, a copy of that motor instruction is broadcast in a way that it can be subtracted from the sensory inputs that then follow. The self is created in that moment because it is the part we are subtracting from the flow of impressions. The world is then whatever stayed stable despite our actions.
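The subtraction story can be sketched as a few lines of code. The numbers and the "forward model" are invented for the illustration; the only point is the bookkeeping: what gets cancelled is attributed to "self", and the residue is "the world".

```python
# Minimal sketch of efference-copy subtraction (reafference cancellation).
# All values are toy numbers; the forward model is a stand-in.

def perceive(sensory_input, motor_command, forward_model):
    """Subtract the predicted consequences of our own action from the input.
    The cancelled part is 'self'; whatever remains stable is 'the world'."""
    predicted_self_motion = forward_model(motor_command)
    world_signal = sensory_input - predicted_self_motion
    return predicted_self_motion, world_signal

# Toy forward model: turning the head by x degrees shifts the scene by -x.
forward_model = lambda head_turn: -head_turn

# Scene shifts -30 degrees after we commanded a 30-degree head turn:
self_part, world_part = perceive(-30, 30, forward_model)
print(self_part, world_part)   # -30 0 : all the motion was our own doing

# Same head turn, but the scene shifted -35 degrees:
self_part, world_part = perceive(-35, 30, forward_model)
print(self_part, world_part)   # -30 -5 : 5 degrees of real world movement
```

When the residue is zero, the spin of the scene was entirely "us"; a non-zero residue is the world actually moving despite our action.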

    It is not hard to look at the cognitive architecture of brains and see the necessary evolutionary logic of its processing structure. And a running sense of self is just the flipside of constructing a running sense of the world.

    Then on top of that, brains have to deal with an actual processing lag. And the best way to deal with that is to forward-model the shit out of the world.

    Then on top of that, it is efficient to have a division of labour. The brain wants to do as much as it can out of learnt habit, and that then leaves slower responding attention to mop up whatever turns out to be novel, surprising or significant during some moment.

    That leads to consciousness having a logical temporal structure. You have some kind of conscious or attention-level set of expectations and plans at least several seconds out from a moment. About half a second out, attention is done, and well-briefed, learnt habit has to take over. It does detailed subconscious predicting and reacting. If someone steps into the road while you are driving, you hit the brakes automatically in about a fifth of a second. After that, attention level processing comes back into it. You can consciously note that thank god you are so quick on the brakes, and what was that crazy guy thinking, and why now is he looking angry at me, etc.

    So [conscious prediction [subconscious prediction [the moment] subconscious reaction] conscious reaction].

    This is all proven by psychological experiment. The whole issue of reaction times and processing times is what got experimental psychology started in the late 1800s.

    Where does human freewill come into it? Well, what I've outlined is the evolution of the cognitive neurobiology. The basic logic is the same for all animals with large brains. They all need to construct a running sense of self so as to have a running sense of what then constitutes "the world". They all have a division of labour where they can act out of fast learnt habit or slower voluntary attention.

    But humans are different in that we have evolved language and are essentially social creatures mentally organised by cultural evolution. Yes, memes.

    So now our perceptual sense of self takes on a social dimension. We learn to think of "ourselves" in terms of a wider social world that we are representing. We learn to "other" our biological selves - this running perceptual self with all its grubby biological intentionality - and see it from an imagined social point of view. We learn to be disembodied from our own bodies and take an introspective or third person stance on the fact we can make choices that our societies might have something strong to say about.

    So freewill is a social meme. It is the cultural idea that being a human self involves being able to perceive a difference between the "unthinking" selfish or biologically instinctual level of action and a "thinking", socially informed, level of self-less action.

    An animal is a self in a simple direct fashion - a self only so far as needed to then perceive "a world". A human, through language, learns to perceive a world that has themselves in it as moral agent making individual choices. That then requires the individual to take "conscious responsibility" for their actions. Every action must be judged in terms of the contrast between "what I want to do" and "what I ought to do".

    So the idea of freewill is an ideal we strive to live up to. And yet the temporal structure of actual brain processes gives us plenty of dilemmas. We do have to rely on "subconscious" habit just for the sake of speed and efficiency. The gold standard of self-control is attention-level processing. But that is slow and effortful. However - as human culture has evolved - it has set the bar ever higher on that score. As a society, we give people less and less latitude for sloppy self-control, while also making their daily lives fantastically more complex.

    A hunter/gatherer level of decision making is pretty cruisey by comparison. You go with the flow of the group. Your personal identity is largely a tribal identity. You get away with what you can get away with.

    But then came institutionalised religion, stratified society, the complex demands of being a "self-actualising" being. A literal cult of freewill developed. The paradoxical cultural demand - in the modern Western tradition - is that we be "self-made".

    So sure, there must be some evolutionary logic to this. There must be a reason why the freewill meme is culturally productive. But the point also is that it is a psychologically unrealistic construct. It runs roughshod over the actual cognitive logic of the brain.

    We just shouldn't beat ourselves up for not being literally in charge of our actions at all times. We are designed to be in some kind of flow of action where we let well-drilled habit do its thing. And of course our minds will wander when we are being expected to consciously attend to the execution of stuff we can handle just as well out of habit. The idea that we can switch our concentration off and on "at will" just cuts against the grain of how the brain naturally wants to be. Attention is there for when things get surprising, dangerous, difficult, not for taking charge of the execution of the routine.

    So "freewill" sits at the centre of so much cultural hogwash. There are good cultural reasons for it as a meme. It is really to modern society's advantage to have us think about our "selves" in this disembodied fashion. It allows society to claim control over our most inadvertent or reflexive actions.

    But it is also a demonstrably unhealthy way to frame human psychology. If we just recognise that we have slower voluntary level planning and faster drilled habitual responses, then this unconscious vs conscious dilemma would not create so much existential angst.

    We are not a conscious ego in possible conflict with an unconscious id (and also under the yoke of a social super-ego). Our "self" is the skilled totality of everything the brain does to create a well-adapted flow of responses to the continually varying demands of living in the world - a world that is both a physical one and a social one for us as naturally social creatures.

    The actual freewill dilemma arose because Newtonian determinism appeared to make it paradoxical. If we are just meat machines, then how could we be selves that make our own rational or emotional choices?

    But physics has gone past such determinism. And the very fact that the brain has to forward model to keep up with the world means that it is not being neurally determined anyway. Its knowledge of how the world was an instant or two ago is certainly a constraint on the expectations it forms. But the very fact it has to start every moment with its best guess of the future, and act on that, already means we couldn't be completely deterministic devices even if we tried.

    Universal computation is logically deterministic. A programme - some structure of set rules and definite data - has to mechanically proceed from an input state, its initial conditions, to an output state.
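    To make "logically deterministic" concrete, here is a toy Turing machine of my own devising - a fixed rule table, a tape of symbols, and a mechanical march from input state to halt state, the same way every time:

```python
# Toy deterministic Turing machine (an illustrative sketch, not a standard API).
# A fixed rule table plus definite data must proceed mechanically from an
# input state to an output state. Same programme + same data = same result.

def run_tm(rules, tape, state="start", head=0, max_steps=1000):
    cells = dict(enumerate(tape))
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = cells.get(head, "_")               # "_" is the blank symbol
        state, write, move = rules[(state, symbol)]
        cells[head] = write
        head += 1 if move == "R" else -1
    return "".join(cells[i] for i in sorted(cells))

# Rule table: flip every bit, then halt at the first blank.
rules = {
    ("start", "0"): ("start", "1", "R"),
    ("start", "1"): ("start", "0", "R"),
    ("start", "_"): ("halt", "_", "R"),
}

assert run_tm(rules, "1011").rstrip("_") == "0100"
assert run_tm(rules, "1011") == run_tm(rules, "1011")  # fully repeatable
```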

    But the brain is not that kind of computer. So it is neither physically deterministic (as no physics is that in the Laplacean sense), nor is it computationally deterministic.

    Thus "freewill" just isn't a real ontological problem. There is no metaphysical conflict. (Unless you are a dualist who believes "mind" to be a separate substance or spirit-stuff. And of course there are many who take that essentially religious view still. But for psychological science, there just isn't an ontological-strength problem.)
  • Do numbers exist?
    But isn't it true that just because we can cleverly simulate an approximation of certain aspects of the human mind, that this does not necessarily mean that this is literally how the human mind works?

    In other words we've invented flying machines; but that doesn't mean we've discovered the mechanism by which birds fly.
    fishfry

    This ignores the fact that the flying machine designers quickly gave up trying to copy the flapping wings of birds and instead focused on a non-bird model of flying machines. The flapping did not prove "unreasonably effective".

    Whereas the opposite is the case with NNs. Once programmable computers existed, even just emulating biologically-inspired information processing architectures proved "unreasonably effective" for certain tasks, like pattern matching.

    So that is a particularly inapt comparison with which to make your case.

    If I'm making a physicalist (but not computationalist) argument, then I must admit that we are machines.fishfry

    So an organism is a machine? You seem out of touch with biology.

    Sorry I must have missed that. Link again please?fishfry

    Artificial Life Needs a Real Epistemology - H. H. Pattee
    http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.18.1316&rep=rep1&type=pdf

    Of course there is some physicalist understanding of what brains do, even though our current state of knowledge is quite limited. And since I've said repeatedly that mind [whatever it is] is a function of brain/biochemistry, it follows that there may someday be understanding of it. Why would you think I've said the opposite?fishfry

    So on the one hand you can't even define what you might mean by mind. On the other, you can make confident claims about neuroscience having a quite limited understanding. And you keep reverting to talk of "brain biochemistry" when the question is about cognitive functions.

    Don't you see the inconsistency of one minute admitting to knowing little, the next to be making a sweeping judgement of the whole field?

    That's my understanding of the Church-Turing thesis. If you have a different idea I'd be interested to hear it.fishfry

    That defines computation in the general limit ... if you are computing number theoretic functions.

    So perhaps brains might not be that kind of "computer". Maybe there is not a single arithmetic operation involved in their neural processes. Maybe even "summing weights" is just an analogy for the integrative processes of brain cells. Church-Turing may have zilch to do with neurology. And yet it is still wrong to then attribute neural information processes to "biochemistry".

    And how could you have a view either way without a little more neuroscience to inform your opinion?

    That's exactly why I think we need a revolution in physics that shows us how to go past TMs into some mode of computation that is more powerful than a TM.fishfry

    Given that TMs require no more physics than a gate that can read, write and erase a symbol on an infinite tape, why the heck would we expect new physics to make a difference to Turing universal computation?

    The power of Turing machines is that they need the least physics we can imagine. What more do you want - time travel, Hilbert space, quantum teleportation? That's back to front. It is the virtual elimination of any complicated physics which is the guarantee of the computational universality.

    When you say "information is meaning," that's something I absolutely deny by my definition of information.fishfry

    Who could win an argument against your private definitions?

    So let's stick to the real world of science, maths and philosophy. If you want to talk about Shannon entropy, fine. But then we all know that is based on counting meaningless bits. If we understood the pattern to mean something, then each successive bit would fail to be such a surprise.

    If I know you are transmitting the digits of pi, I could stop you right after you said "3".
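    A toy calculation makes the point. First-order Shannon entropy scores "0101..." at a full bit per symbol because it only counts frequencies; a reader who knows the generating rule has zero surprise left:

```python
import math
from collections import Counter

# Toy sketch of the Shannon baseline: entropy measures average surprise per
# symbol when a stream is treated as meaningless bits - we know only the
# symbol frequencies, nothing about any generating pattern.

def entropy(s):
    n = len(s)
    return -sum((c / n) * math.log2(c / n) for c in Counter(s).values())

# Frequency-counting scores "0101..." at a full 1 bit per symbol...
assert abs(entropy("01" * 32) - 1.0) < 1e-9
# ...even though anyone who understood the pattern would feel no surprise.
# A constant stream carries no Shannon information at all.
assert entropy("0" * 64) == 0.0
```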

    I don't think you can claim that information is meaning. Information is meaningless. Humans give meaning to information. Isn't that true?fishfry

    You don't get it. Information theory defines a baseline where the meaning of a bit string is maximally uncertain. Each bit says nothing about the following bit. Then from that baseline, you can start to quantify the semantics. You can derive measures such as mutual information that speak to the information content.
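    As a toy illustration of that move from baseline to semantics (my own sketch), mutual information quantifies the structure two streams share, once maximal uncertainty has been fixed as the zero point:

```python
import math
from collections import Counter

# Toy sketch: once the maximum-uncertainty baseline is fixed, shared structure
# between two streams becomes measurable as mutual information,
# I(X;Y) = sum over (x,y) of p(x,y) * log2( p(x,y) / (p(x) * p(y)) ).

def mutual_information(pairs):
    n = len(pairs)
    pxy = Counter(pairs)
    px = Counter(x for x, _ in pairs)
    py = Counter(y for _, y in pairs)
    return sum((c / n) * math.log2((c / n) / ((px[x] / n) * (py[y] / n)))
               for (x, y), c in pxy.items())

# Y is a perfect copy of X: one full shared bit.
copied = [("0", "0"), ("1", "1")] * 50
assert abs(mutual_information(copied) - 1.0) < 1e-9

# Y varies independently of X: nothing shared.
independent = [("0", "0"), ("0", "1"), ("1", "0"), ("1", "1")] * 25
assert abs(mutual_information(independent)) < 1e-9
```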

    I don't think you can claim that information is meaning. Information is meaningless. Humans give meaning to information. Isn't that true? If I say I saw a "cat," the symbols by themselves convey no meaning. It's humans, English-speaking ones at that, who say that the word cat stands for a furry domesticated mammal that's not a dog.fishfry

    That's one of the advantages of a semiotic approach to the whole issue. It recognises that there is a modelling relation involved. A symbol has meaning due to a habit of interpretation. That habit is tied to action in the world. So the informational side of the equation is causally connected to the material side. There is only meaning in relation to the material consequences of any beliefs.

    Again, read Pattee - http://www.academia.edu/3144895/The_Necessity_of_Biosemiotics_Matter-Symbol_Complementarity

    But then you say well yes humans aren't TMs but they are NN's. And you won't come to terms with the fact that NNs are a special case of TMs. NNs are algorithms. So you aren't gaining anything by claiming that humans are NNs and not TMs. We keep going over this point.fishfry

    You keep misrepresenting my argument.

    The significance of an NN would be that it captures something important about brain cognition. That is different from claiming the brain is literally just an NN.

    And you seem confused about algorithms. They are rules for making calculations. So they are something we think it meaningful for a TM to do. They are not the barest syntax of rule following we can imagine. They are semantic actions performed on a machine.

    So already we are into the real world where computation carries extra semantic baggage. The algorithms are intended to represent some actual informational process. This could be just handling a company's payroll or driving a video display. Or it could be an attempt to mimic the connective behaviour of neural circuits.

    A TM is just a universal algorithm runner. How we then exploit that is down to the kind of information processing we think might be meaningful. We have to write an algorithm that seems to perform the task we have in mind. That could be representing brain functions. It could be representing accounting functions or moving image functions. Universal Turing machines have zilch to say about whether we humans are choosing to run usefully realistic routines or just scrambled garbage randomly concocted.

    You are confusing yourself in jumping so interchangeably between talk of TMs, information, computation and algorithms.

    What exactly are we doing that goes beyond mere algorithms?fishfry

    Again, we write the algorithms. They have zilch to do with the universality of TMs. So you can't claim them as "mere". They are intended to represent some meaningful relation expressed as some mathematical operation. They have to perform a function we find useful. Thus they could model a company's payroll, or model the cognitive operations of a brain.

    A payroll model is probably pretty ho hum. But a workable brain model?

    Yes, the map is not then the territory. As someone pushing semiosis - a modelling relations view of "information processing" - you don't have to explain that to me. It is what I've been saying.

    You admit that you are not talking about NNs as currently understood. You are using "NN" to mean whatever it is that humans do, that's not a computation.fishfry

    You are convincing me of your utter unfamiliarity with neural networks in practice. Or even in theory.

    I call bs on that. Not that you don't know some guy, but that he can't back up his system. If it's built out of processors and memory devices then he can back them up just fine with perfectly conventional techniquesfishfry

    In fact it is completely custom hardware. It is not a simulation of a neural net on conventional technology. It is a direct hardware implementation of a neural network.

    I do hope you agree that building artificial machines that exhibit "thinking" in constrained domains is one thing; and that claiming that the human mind works that same way is quite another.fishfry

    Yes, I've spent 40 years being critical of the over-blown claims of computer science. So I am basically skeptical of the usual talk of getting close to building "a conscious machine". I know enough about the biology of brains to see how far off any computer system still is.

    Indeed, I would like it if there was an in principle argument for why no mechanical device could ever simulate the necessary biological processes. It would suit my prejudices. So I am just being honest when I confess that there isn't an absolute argument. The effectiveness of NNs suggests that some level of mind-like technology - as good as cockroaches and ants - may be feasible.

    And remember where this started - your claim that abstract thoughts are biochemical processes. You followed that howler by jumping the other way - saying the mind was in no way the product of informational processes.

    This second misstep was based on your very narrow conception of information processing - one rooted in TMs.

    The reason for the unreasonable effectiveness of TMs is that they are the theoretical limit on semiotic encoding. Semiosis depends on symbols. A TM is the conceptually simplest machine for handling symbol strings.

    A DNA strand can code for a pretty vast array of protein molecules, but that’s it really. Human language can code for a vast array of ideas. That's really powerful as we know. But a TM can implement mathematical algorithms. It can articulate any mathematically-constructable pattern. That is a whole other level of semiosis.

    So yes. TMs are really basic. They represent pure syntactic potential, stripped of all physical constraints as well as all semantic.

    But then we do have to build back the semantics - add the algorithmic structures - to make TM-based technology do actually useful things. Much like DNA has to code for the kind of neural connectivity that can do actually useful things for organisms.

    Semiosis recognises the essential continuity here. It sees the ontological difference that codes or syntax makes, the new "unphysical" possibilities they create.

    Maybe that's the "physics revolution" you are talking about. I certainly think that it is myself. It explains the information theoretic and thermodynamic turn now happening in fundamental physics I would argue.
  • The Ontological Status of Universals
    One author, one meaning... or else equivocationcreativesoul

    Rubbish. Speech acts are intrinsically creative. No words ever exactly capture the meaning I had in mind, despite even the opportunity for rewriting. But then the forced concreteness of having to have found some formula of words paves the way for further departures in thought. More refined interpretations arise.
  • The Ontological Status of Universals
    I've already adequately argued my case without subsequent relevant and/or valid objections.creativesoul

    Shall we take a collective vote on that?
  • Do numbers exist?
    I wonder, though, whether Peirce can make sense of the development of reasonable habits in terms of something more fundamental?Janus

    But isn't that the problem? The way you phrase it suggests that you have certain beliefs about the nature of fundamentality.

    The semiotic view is that tychism - chance or spontaneity - is the most fundamental starting point because it has the least regularity or stability. It is the least concrete possible state. So it is not a "something" - some more basic level of substance. It is a state of unfettered anythingness. It is pure instability without habit or regulation.

    This then means reality arises by a restriction on a fundamental anythingness. So Peirce has a metaphysics we can recognise from Anaximander. And one that also is now straight out of modern quantum physics and thermodynamics.

    So any metaphysics that tries to get something from nothing does have a problem. But reducing everything to just something is easy, by contrast.

    And because we know something does indeed exist - us and our cosmos - we already know that nothingness couldn't have been the case. So whatever our metaphysical reasoning leads us to as the "primal condition" has to be the best answer we are going to get. Which is why we would believe in Firstness, vagueness, the Apeiron, a quantum foam, or whatever best represents a condition of chaotic symmetry, a realm of utterly unstable fluctuation.

    This foment may indeed sound a little like a primal raging will to exist. But does any connection to Schop go deeper than that?

    Schop could say that Will comes to manifest in ever more habitual ways, which become the more reasonable as the world as idea unfolds; he could say that Will gains its increase by establishing habitual manifestations.Janus

    Yes. Maybe Schop could be mapped to this kind of "anythingness" based metaphysics. I mean it is the general alternative option that runs through all creation stories.

    Either there was nothing, and existence got created. Or else existence is a result of a disorganised everythingness that got regulated.

    I just think that Peirce developed the best account of the mechanism for a self-organising everythingness. He realised it had to be a triadic tale, not merely a dualistic one.

    For me, when you describe Schop's Will, it seems to be trying to stand for two things at once - both the material spontaneity and the formal constraints. It is the primal source of the energy and also the end towards which that energy is directed.

    Again, a triadic view allows everything to arise emergently. It stands against the usual view where something can only come from something - the view that presumes substance to be a conserved quantity in the process of creation. Peirce's metaphysics is an open systems view - one which starts in the unlimited and develops its concreteness through self-bounding or self-closure.

    So the question about what is "fundamental" is flipped. Any beginning - and any ending - have to be the least concrete kinds of causes imaginable. As the concrete is what arises in the middle between them.

    That means the beginning is a Firstness or vagueness. Just pure fluctuation. And the ending is Thirdness or generality. Just the fixity of "a habit". A state where all differences are assimilated to a common idea.

    So, as I say, the beginning and the end are real (ie: not nominalist). But they are logically opposed (one being vague, the other general) and both are arrived at as being as "insubstantial" as can be imagined.

    The Will, by contrast, seems to exist as an efficient cause that drives the action. It is definite at the start, and gets to where it always intended by the end.

    It just lacks the formal dichotomous division of the vague and the general (as that to which the principle of non-contradiction and the law of the excluded middle respectively fail to apply). And it lacks the insubstantiality that can then stand as the contrast to the actuality, the particularity of secondness, that arises emergently "in the middle" - in good old hylomorphic fashion.

    So Peirce has deep roots that are sunk right into logic itself - the laws of thought. It connects in direct fashion to Aristotelian metaphysics as well.

    I don't get any of this kind of rigour from Schop. But then I've never looked into him that deeply.
  • Do numbers exist?
    So, no idea of time, space, causality, differentiation and so on can be coherently applied to Will.Janus

    I think Peirce has a similar notion of the experience of "firstness", but maybe I have misunderstood.Janus

    This is the problem for me. Peirce makes sense of causality as the development of reasonable habits. I can follow that as an intelligible metaphysics.

    But not Schopenhauer. In the end, I can't piece together a logical description of a coming into being as a concrete self-organising process. The bits don't fit together.

    Peirce liked Schelling. I can see why.

    Peirce at first disliked Hegel but then came to appreciate him. Again, I see why.

    Peirce seems to have been silent on Schopenhauer. Perhaps Schop just wasn't systematic enough for there to be a real metaphysical thesis to critique?
  • Do numbers exist?
    Not sure he said it best. But yep. Materiality is located action - action with a direction.

    Then the other half of the causal story is the global form which constrains actions to locations and directions.

    To reconnect to the OP, that is why I would be a realist about global constraints as well as local degrees of freedom. The two together make for a reality that has an observable structure and regularity.
  • Do numbers exist?
    But no, NN's are not "mind-like." It's starting to become my mission in life to explain to people why NN's are *NOT* "mind-like."fishfry

    Fine. I would agree that NNs are not biologically realistic in some fundamental ways. But also, NNs are an attempt to be more biologically realistic in some important structural or information-processing fashion.

    So this could easily be an argument over whether the glass is half full or half empty. That is why the epistemology of NNs demands especial care in a Philosophy of Mind discussion.

    Airplanes are stunningly effective at flying, yet birds don't work that way.fishfry

    But what is the "unreasonably effective" feature they share? Is it an aerofoil wing that creates lift?

    I agree that human machines are just basically different from biological organisms. However again, you need some actual general metaphysical argument to spell out the precise nature of that difference. And that is what I'm talking about with biosemiosis, autopoiesis and other "buzzwords".

    You need a theory of the distinction if you want to say anything definite on the matter. And you seem quite dismissive of the literature here.

    You agree with me that perhaps the explanation of mind must await the next revolution (or two) in physics?fishfry

    No. I was being sarcastic.

    Physics is already undergoing the right kinds of revolution anyway. Thermodynamics is becoming foundational. Physics is becoming information theoretic. Holism and emergence can now be modelled in a variety of ways.

    So Newtonian materialism is out-dated. Existence can be understood as a dissipative process. And that is a framework which biology and neurology slot straight into.

    I don't know. Perhaps it has to be biological. Perhaps not. I don't think it's relevant to my argument.fishfry

    Well I would say this shows you don't have an appropriate general metaphysical framework. It has to be a central issue if you are arguing either for or against artificial life and mind.

    That is why I urged you to read that Pattee paper.

    Whatever mind is, it's not a computation.fishfry

    That's a hand-waving statement, so not much use in a serious debate here.

    At the moment I have no clue what you even mean by "mind". I get the impression it is probably the standard dualistic substance ontology - a sensing stuff, a bunch of "feels".

    So we wouldn't even be on the same page for a serious discussion in terms of a comparison of neurological processes and computational mechanisms. You are likely already convinced that there is no physicalist understanding of what brains do.

    Hmmm ... that's kind of an interesting technical question. So there's the neural wetware of the brain, and you are asking me if it is possible that SOME informational process is implemented.

    Um ... well ... sure. Why not. If I blink my eyes at you in morse code I'm digitizing my thoughts. For that matter, I can execute the Euclidean algorithm with pencil and paper. So yes, wetware can certainly implement computational processes. But not everything wetware does can be explained by a computation.
    fishfry

    You seem to entirely miss the point.

    You appear to believe that TMs completely define all possible notions of computation, information and semiosis. And so any question about "information processes" or "processing architecture" gets immediately translated into a TM view.

    But just maybe TMs are a very tiny fragment of a much larger landscape.

    Of course, there is something immensely powerful about TMs in being (almost) pure syntax/no semantics. In short, they are (near) perfect machines. They represent a completely constrained and rule-bound universe. And so they leave out all the "messiness" of the physical and biological world. They leave out, in fact, information as traditionally understood - ie: information as meaning.

    It is like the syntax of Boolean logic. To reconnect to the OP, there is something "unreasonably effective" about reaching the limits on a de-semanticised view of reality - one where we just model reality in terms of its simplest syntactical rules.

    So TMs and Boolean logic idealise reality. They abstract away the materiality or particularity of physicalist semantics to arrive at the simplest, sparest, syntactical forms.

    Great. Defining the ultimate limits of reality is what it is all about. But maybe there is such a thing as over-simplification.

    Machines are rule-bound artificial systems. And so they can't construct themselves. They can't give themselves purposes; they don't have autonomy. Machines are useful to us humans because it is we who get to design them and build them to serve some purpose.

    However organisms are systems with evolved designs and purposes. They have an irreducible causal complexity. And that is their "secret". There is always semantics - or semiosis - involved.

    So the whole mechanical paradigm of nature is flawed at root if it excludes the basic causal complexity of real living and minding creatures.

    We can see that TMs and Boolean logic leave out formal and final cause. Well they leave out material cause as well. All they are is pure syntax. They can be used - by an organism with a purpose and a design - to represent a formal system of entailment. They can capture the description of a syntactic structure. But being such a rarified representation of reality, the computational patterns that result have an extreme real-world brittleness.

    In practice, any computer program or computer circuit is incredibly prone to bugs. Just one broken link and the whole finite state automaton grinds to a halt.

    Organisms, by contrast, not only thrive on physical instability; their very existence depends on it. Life and mind arise at the "edge of chaos": where things are perched on the verge of falling apart, the slightest extra informational nudge can push them instead into falling together.

    So life and mind thrive on material dynamism. TMs and other machines only flourish where all the uncertainties of the real world have been managed out of existence by their human designers. Mindless routine following becomes possible where minds have made that a safe thing to do.

    Anyway, my point is that any biologist or neurologist would understand that computers and organisms are different in this fundamental way. There is a reason why TMs are both such "universal" machines, and also the most biologically helpless of physical structures.

    There is a general metaphysical paradigm that accounts for why brains aren't computers, and yet also, we could build computers that start to have some of that biological realism designed into them.

    A "true" NN has to learn for itself. That's both its advantage and disadvantage. It is essentially a black box to its human owner.

    I know a "mad genius" who has developed one of the currently most advanced neural network computers in the world. It runs his company for him. But he has no clue how it works inside. It grew its own "programme". And if it failed, he couldn't transfer its software to another hardware rack. He can't even do a memory back-up as such.

    But because the memory doesn't work like a traditional TM device, and instead is more like a brain, that is not such a problem as it has natural fault tolerance. The failure of individual links can't corrupt the whole system.

    So yep, the whole NN issue isn't clear-cut. But the field has a history now. Computer science has been exploring the degree to which neurologically realistic architectures can lead to a more organismic notion of a machine.

    We already have a mathematical definition of the most non-organismic one - a TM/Boolean one - as the theoretical limit of a machine that is all syntax, no semantics. So the next question for the engineers is how to start building back in some useful biological realism. And that in turn demands a general metaphysical theory about how to define "semantic processing", or semiosis.
  • Do numbers exist?
My thesis here is that mind arises from a physical process in the brain; but that it is not a computational process in any way that we currently understand computation. It's not a TM or an NN or a cellular automaton or anything else along those lines.fishfry

It seems curious that only a few posts back you were trumpeting the mind-like abilities of NNs. If they were inspired by the "computational" structure of the brain, it would be surprising for them to be so effective at machine learning while the brain itself functioned along entirely different lines.

    I don't know what the actual mechanism might be ... I think this will take another revolution in physics.fishfry

    Sounds legit.

    No I don't think biochemistry is necessary. Or sufficient. It just "happens to be the case" in this instance. It's possible that machinery might become conscious, so biochemistry's not necessary. And there's plenty of biochemical matter walking around that's not particularly conscious, so biochemistry is not sufficient.fishfry

    So that is a retraction of your original statement coupled to a backtrack on the retraction?

    It is the structure of the matter that matters and not the particular matter. But you don't want to say the structure implements any kind of informational process?
  • Do numbers exist?
    Do you really want to argue that Searle thinks "biochemical processes" are a necessary and sufficient condition of conscious thought?

It is well known that Searle fluffs around the issue because he has some broke-arse property dualism in mind.

    But he usually talks about neural processes and brain structures as the likely level where first person experience might "pop out" into existence as an emergent property of a third person material world.

    I've not seen him make a positive assertion that consciousness would be emergent just from "biochemistry", sans all that rather suggestive neural circuitry. So you might want to check your understanding.

    Peace out, as they used to say.
  • Do numbers exist?
    But I can do many things that CAN'T be emulated by a TM. Like understand Chinese.fishfry

    Err, yeah. As I was saying.

    But the problem was you began this by claiming biochemistry is capable of things like understanding Chinese. :)
  • On Doing Metaphysics
    All that you know about the physical world is from your experience, in fact all of it is your experience. That's all there is, for you.Michael Ossipoff

    Seems standard...

    There are abstract if-then facts. There couldn't have not been abstract if-then facts. And, just as inevitably, there are complex inter-referring systems of inevitable abstract if-then facts about hypotheticals.

    In fact, there are infinitely-many such complex logical systems.
    Michael Ossipoff

    ...but then no idea what this could mean.

Is this saying that an assumption of intelligibility - as in the laws of thought - is a precondition to cognition, or something Kantian like that?
  • Do numbers exist?
    Are you actually making the claim that even though a NN can be emulated by a TM, the NN somehow implements semantics?fishfry

    Nope. I made the point that humans and NNs can emulate TMs. (You did claim to be familiar with the CRA?) However that doesn't make either of them TMs.

    I also said neuroscientists find NNs to be biologically realistic models of neural processes. There is no reason to think brains are finite state automata. There is no reason to think they are programmable computers (von Neumann machines). There is no reason to think they are Turing complete. But - given that NNs are inspired by the biology - it is not much of a surprise that NNs implemented even as logic devices show some of the important functionality we associate with nervous systems.

So NNs are good models. TMs, by contrast, are woeful models of brain function.

    Can an NN have semantics or is it also just a syntactic device? Well, it all rather depends now on how you define semantics. And that is what biosemiotics concerns itself with. One would need a general physicalist theory of semantics to answer the question in some quantitative fashion.

    I would say the NNs built to date aren't really semantic. They are just pattern matching systems. And they require supervised learning, so the semantics are clearly "in the mind" of their human trainers. But arguably they are getting near the abilities of an ant or cockroach.

I would say that is still only in terms of pattern matching ability. An embodied view of cognition would say that a hell of a lot is still missing in terms of an actual ability to "make sense of the world" even at that level. NN designers haven't even got their heads around the kind of functionality they need to start implementing as the "learning algorithms" on that score.

    I could say a lot more about the semantic issue, but it's way off topic for this thread.

    The issue was whether maths is Platonically real or a free creation of the human mind. I argued for a third position - one which says the maths that is "unreasonably effective" when it comes to physicalist theories, is so because it describes real physicalist limits on reality.

    So enough of the sideshow. You only turn anything I say back to front anyway.
  • The Ontological Status of Universals
    Creative: "I refer you to my entire post history. Any astute reader perusing that will surely uncover the nature of my heretofore mentioned claim. (Peel me another grape, darling.)"
  • The Ontological Status of Universals
    The usual rambling bullshit instead of any direct answer.
  • The Ontological Status of Universals
    Click on my avatar. Click on "comments" icon. Scroll down looking for comments with this thread title. Read for yourself. Much of the discourse between Wayfarer/Andrew M and myself covers it and it's all fairly recent. All my comments in this thread would be a good place to look... I would think.creativesoul

    I think you give yourself way too much credit for clarity of writing. I didn't understand your comment so I wouldn't even know what other comments might count as the argument that supports it.

So I argued that a notion of the general vs the particular doesn't make sense unless it is understood how it connects to the distinction between the essential (or necessary) and the accidental (or chance).

    Generality is the essence that a collection of individuals would have in common. Their particularity would then be the accidents that are the differences that don't make an (essential) difference to that.

    I illustrated this logical principle in reference to your male duck and non-laying duck examples.

    If you can't make a counter-argument here, then I can only take the view you can't in fact muster one.
  • The Ontological Status of Universals
    I find that your approach presupposes agency where none is warranted. Drop the notions of intent and purpose, then see what happens to what's left of it...creativesoul

    Why would I arbitrarily exclude final cause from nature?
  • Do numbers exist?
    That is:

    * If you claim that mind is a neural net; then you must also agree that mind is a TM.
    fishfry

    Either you understand the difference between emulating a TM and being a TM, or you don't. Either you understand the difference between analog computers and digital computers, or you don't. Either you understand the difference between semantics and syntax, or you don't. Etc, etc.

    Maybe you could start with this famous philosophy of mind argument - https://en.wikipedia.org/wiki/Chinese_room#Chinese_room_and_Turing_completeness

    To understand my biosemiotic take on the issue, this is a nice foundational paper - http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.18.1316&rep=rep1&type=pdf
  • The Ontological Status of Universals
    Where? If so, why not cut and paste it here?
  • Do numbers exist?
    Maybe you don’t realise that snark is pretty routine on your part.

    And I have investigated neural nets. Your posts on the issue reveal you haven’t really.
  • Do numbers exist?
    My understanding is that we can accommodate abstract mental constructs quite easily within physicalism. Abstractions are thoughts, biochemical processes in my brain.fishfry

    I don't believe the mind is a TM and I don't believe real-world NN's are anything other than TMs....

    Bottom line, why don't you just explain to me why you think a real-world NN is anything other than a TM.
    fishfry

    Hmm. So what I have got from this exchange is that you struggle to keep track of your own arguments because you don't actually have a well constructed metaphysical position. And when you encounter someone who does, you bluster and ad hom. Nice.

    And so here now you have diverted the discussion to something that you hope might be safe ground.

    I said that mainstream neuroscience would reject the reductive materialist notion that abstract thoughts are just biochemical processes in the brain. In some fashion - still not fully understood of course - they would be considered informational and semiotic processes.

    You then leapt to the idea that this meant the activities of the brain are computational processes - Turing machine computational.

I replied no, a TM is a dualistic device. The software is absolutely divorced from the world which gives its rule-bound play any material meaning. It is presumed that the hardware supporting the action has no entropic cost. It is presumed that the inputs and the outputs of this finite state machine are meaningful to some further intelligence outside it. So a TM is just a syntactic device. It can blindly follow rules. But at no point in its mathematical-strength definition is there any semantics included.
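To make the "all syntax, no semantics" point concrete, here is a minimal sketch of a Turing-style machine (the rule table and names are my own illustration). The table blindly maps (state, symbol) pairs to actions; nowhere does the machine "know" that it happens to be incrementing a binary number.

```python
# A minimal Turing machine: pure syntax. The rule table maps
# (state, symbol) -> (write, move, next state); the machine just
# follows it blindly, one cell at a time.
def run_tm(tape, rules, state="carry", blank="_"):
    tape = dict(enumerate(tape))
    pos = len(tape) - 1            # start at the rightmost symbol
    while state != "halt":
        sym = tape.get(pos, blank)
        write, move, state = rules[(state, sym)]
        tape[pos] = write
        pos += move
    lo, hi = min(tape), max(tape)
    return "".join(tape.get(i, blank) for i in range(lo, hi + 1)).strip(blank)

# Rules that happen to increment a binary number: propagate a
# carry right-to-left, then halt.
rules = {
    ("carry", "1"): ("0", -1, "carry"),
    ("carry", "0"): ("1", -1, "done"),
    ("carry", "_"): ("1", -1, "done"),
    ("done",  "0"): ("0", -1, "done"),
    ("done",  "1"): ("1", -1, "done"),
    ("done",  "_"): ("_",  0, "halt"),
}

print(run_tm("1011", rules))  # 1100  (11 + 1 = 12 in binary)
```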

And then, so far as neuroscientists would consider the brain some kind of computer, it would be like a neural network. Which is different from neuroscientists thinking the brain IS a neural network. Rather, it is that neural networks, like brains, implement something like a semiotic relation.

Neural networks are meant to learn from the world by experience. They don't have a programming language and so they don't have a set of syntactic tokens to shuffle about according to some computational grammar. And while they can of course emulate a Turing Machine - just like we can emulate a TM too - that doesn't mean they are TMs. It just means they can follow rules that shuffle symbols without needing to understand anything about what they are doing. Semantics is optional to blind programmatic rule following.
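A minimal sketch of that emulation claim (weights hand-picked for illustration): a single threshold neuron with fixed weights computes NAND, and since NAND is a universal gate, networks of such units can shuffle Boolean symbols - emulate any circuit a TM could - without anything resembling understanding.

```python
# One threshold neuron with fixed weights implements NAND.
# NAND is universal, so wiring such neurons together suffices
# for any Boolean circuit - emulation without semantics.
def nand_neuron(x1, x2, w=(-2, -2), bias=3):
    return int(w[0] * x1 + w[1] * x2 + bias > 0)

for a in (0, 1):
    for b in (0, 1):
        print(a, b, nand_neuron(a, b))  # truth table of NAND

# XOR built purely out of NAND neurons:
def xor(a, b):
    n1 = nand_neuron(a, b)
    return nand_neuron(nand_neuron(a, n1), nand_neuron(b, n1))
```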

    So you made a wild claim - thoughts are nothing more than biochemistry. Now you want to defend the opposite thesis - thoughts are nothing more than Turing computation. Or no, you realise that is ridiculous. So you want to pretend that is my position instead.

    The circle of mathematics is an ideal circle, a pure mental abstraction.fishfry

    Then you don't seem to be interested in metaphysics even as it touches on the reality of numbers. It appears largely that you reject what physicalism might have to say about "reality" just because looking up "buzzwords" is such a tiresome chore ... when you already have all the answers.

    I was hoping that focusing on the reality of mathematical constants might have got us somewhere. Yet it appears you haven't even really thought about the reason constants emerge as limits on material action in physical systems. So that was a waste of time too.

    Oh well. I was expecting too much, obviously.
  • The Ontological Status of Universals
    There's a marked difference between not being able to draw and maintain a dichotomy and rejecting it based upon grounds of inadequacy...creativesoul

    And your argument is...

    [creative, as per usual, will fail to fill in the blank space where his argumentation was meant to go ;) ]
  • The Ontological Status of Universals
    I find it rather interesting that an entire school of thought and belief has arisen as a means to sophisticate what is nothing more than unsophisticated language use.

    "Ducks lay eggs" is not true. That's plain and simple.
    creativesoul

    And so you have some notion of truth that can’t make a useful distinction between the essential and the accidental.

If a female duck can’t lay eggs, that is some kind of accident. But it is still a duck because essentially - barring the accident - it would have laid eggs. As well as having all the other duck-defining features that count as essential. (In the end, this might boil down to a genetic disposition of course.)

    And then a male duck, if regarded as part of the class of male things, would only lay eggs by some kind of accident.

    It is a basic logical principle. That which is not constrained is free. That which is not essential is still possible by accident. Indeed, that which is not prevented has to happen to some degree if it is a possibility.

    So you are working with a notion of reality that doesn’t pick up this essential vs accidental, or constraints vs degrees of freedom, distinction. That leads to an impoverished logical model of reality. You can’t in fact speak its truth because you can’t handle all its facts.
  • On Doing Metaphysics
    You're asking me which particular statement, if falsified or brought into question, would discredit my proposal. Any of them, I'd say. Falsify one of them, or bring one of them into question.Michael Ossipoff

    Out of curiosity, what metaphysical proposal? There doesn't seem to be one in this thread from you. So a link would be helpful.
  • Do numbers exist?
    Thanks for the lengthy reply.

What's true is this. Computationalism is the claim that the mind (or the universe, in a more grandiose version) is a computation. Now those neuroscientists who are computationalists believe that thoughts are informational processes; and those who aren't, don't.

I hope you will agree with me that this is a true statement about the states of belief of neuroscientists, and that this is NOT a settled issue by any means. If nothing else, if mind is a computation, what's the algorithm? When you bring me some computer code and say, "Here, this is how you implement a mind. It's 875,356 lines of C++. Some grad student figured it out," then maybe I'll believe you. Till then, the burden of proof is on you.
    fishfry

I'm definitely not claiming computationalism - or at least not Turing machine computation as you seem to suggest. The mainstream neuroscience view - since Sherrington's "enchanted loom" or Hebb's learning networks - is some kind of neural net form of "computation".

    And more to the point, it is mainstream to emphasise that the brain is involved in informational activity, not merely biochemical activity. Otherwise why is neuroscience interested in discovering the secrets of the neural code, or brain's processing architecture? It knows the biophysics of what makes a neuron fire. But how that firing then represents or symbolises something with felt meaning is the big question. And that can only be approached in terms of something other than a biochemical materialism. It demands a semiotic or information theoretic framework. Which in turn has already considered Turing computation and found it not the answer.

    So broadly speaking, neuroscientists think thoughts are informational processes and not biochemical events. At the same time, they don't think the brain is literally a Turing machine or programmable computer. That might be a helpful analogy, like calling the eye a camera. But just as quickly, the caveats would begin.

    There are important things in the world that are not computations. Like mathematical truth.fishfry

    Computers are machines. They are devices that construct patterns. So yes, of course, human minds seem to operate in a fundamentally different fashion. We can grasp the whole of some pattern. We can understand it "organically" as a system of constraints, rather than as an atomistic construction.

    Our abductive or intuitive approach to reasoning begins with this ability to see the whole that "stands behind" the part. We can make inferences to the best explanation. And then, having framed an axiom or hypothesis, we are also quite good at deducing consequences and confirming by observation.

    So when it comes to mathematical truth, that is what we think we are doing. We notice something about the world. We then leap towards some rational principle that could "stand behind" this something as its more general constraint.

    Turing machines are really bad at making such a holistic generalisation. Neural network computers are our attempt to build machines that are good at implementing this precise inferential leap.

    However if you DON'T believe that mind is a computation, you no longer necessarily have substrate independence. I hope you would grant me this.fishfry

    Yeah. I don't claim complete substrate independence. But then my "computationalism" is a semiotic or embodied one. The whole point is that it hinges on a separation which then allows an interaction.

    A Turing machine does not self-replicate. A Turing machine does not have to manage its material flows or compete with other TMs. But a living thing is all about regulating its physics with information. So an independence from physical substrate (an epistemic cut) is required by life and mind. But only so as to be able to regulate that physics - bend it in the direction which is making the autopoietic wholeness that is "an organism".

    The only way to do that is to execute the algorithm on physical hardware. That is a physical process involving an input of energy and an output of heat. Something a physicist could observe and quantify.fishfry

    Yes, you can measure one side of the computational story in terms of entropy production. But how do you measure the other side of the story in terms of "negentropy" production? The fact that your computer runs either hotter or colder doesn't say much about whether its eventual output is righter or wronger.

    Where does the algorithm itself live? Well it lived first in Euclid's brain. But isn't Euclid's mind a physical process? His abstract thoughts are physical processes, and his thoughts can be implemented as physical processes. But I don't see why we need dualism.fishfry

    We are labouring the point. If you really can't see the difference between syntax and semantics by now, things are likely hopeless.

    You keep talking about the physical events as if they are the informational processes. Of course a neuron or a transistor or a membrane receptor or a speedometer can be described in terms of their "physics". But it is hardly the level of description that explains "the process" which we are interested in.

    To reduce functional or informational processes to atomistic material events becomes a nonsense. Especially for true computationalism. The only time we are interested in the physics of a logic gate is when it doesn't behave like a logic gate - that is when it has some uncontrolled physical process going on.

So algorithms are extreme mechanistic dualism in fact. You don't even have to run a programme for it to "have a result". The result could only be different if the physics of the real world somehow intruded. And then we would say the computer had a bug. It over-heated or something.

    And maths is kind of like that. We imagine it as transcendent and eternal truths - things that would be true without ever needing the reality of physical instantiation. Pure information. It is crazy to talk of Euclidean maths as existing in some geezer's long dead brain.

    Jeez that sounds a little mystical. You're saying that Euclidean geometry is the midpoint between elliptic and hyperbolic geometry. Yes this is a true mathematical fact, but it is not mystical.fishfry

    Why do you interpret that as a mystical statement? My point was that it is not a mystery because it is what you would expect from principles of physicalist symmetry. If every kind of difference gets cancelled (as the negatives erase the positives) then what you are left with is the mid-point balance. It would be natural to expect "flatness" as the emergent limit state.
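The mid-point claim can be stated exactly (a standard textbook formula, not anything novel). In a space of constant curvature K, a circle of geodesic radius r has circumference:

```latex
C(r) =
\begin{cases}
\dfrac{2\pi}{\sqrt{K}}\,\sin\!\big(\sqrt{K}\,r\big) & K > 0 \ \text{(elliptic)} \\[6pt]
2\pi r & K = 0 \ \text{(flat)} \\[6pt]
\dfrac{2\pi}{\sqrt{-K}}\,\sinh\!\big(\sqrt{-K}\,r\big) & K < 0 \ \text{(hyperbolic)}
\end{cases}
```

Both the elliptic and hyperbolic branches converge to the flat 2*pi*r as K goes to 0, from opposite sides. Flatness really is the balance point where the two kinds of curvature cancel.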

    So I'm not going to try to think about this. You have to start somewhere, and perhaps we could agree that for purposes of this conversation, there is the number pi and there is a rock, and that we don't have to consider their quantum relationship to each other, if any.fishfry

    Well it is your choice to ignore what we know to be fundamental in preference for what we know to be emergent.

    I can't agree that it makes for good metaphysics. And I think you just want to avoid having to make a better argument.

To a number theorist, integers are as real as rocks. I doubt Wiles would agree that he's written a work of fiction. Or even give the matter any thought at all.fishfry

    Fine. The philosophical issue here is not the pragmatics of mathematical research. And I even agree that mathematical research - in being an informational theoretic exercise - would deliberately insulate itself from such fundamental metaphysical issues. Maths doesn't really want to even concern itself with geometry - the physical constraints of space - let alone with actual materiality, or the constraints of energy, the possibilities of change. So - as institutional habit - integers are as real as rocks.

    Except they are then ... ideas? Constructs? Thoughts in the head?

    You seem to want it both ways. And that winds up in Platonism.

    That is why my own position is the semiotic one where the integers are the ideal limits on materiality. That is a formula of words that both accepts a strong difference and a strong connection between the two sides of the semiotic equation. Information is real if it is causal. And being an actual limit on material freedom is pretty clearly causal.

    Ooh you are on shaky ground here! Gödel told us that math is NOT an informational process! No algorithm can determine the truth of mathematical statements.fishfry

    See earlier where I spoke about abductive reasoning and our ability to make inferential leaps. Gödel validates my approach here. The failure of logical atomism is the solid ground for the holist. It is why a semiotic approach to reality is justified.

    Yes but you're going all woo-woo about a trivial mathematical fact. Well not trivial, non-Euclidean geometry was a big deal when it was discovered.fishfry

    You mentioned pi. I am just highlighting how the usual woo-woo aspect - the fact that there is just this "one number" picked at random out of all the numbers on the number-line - masks a bigger story. The woo-woo evaporates when you see there is a "material" process that picks out a value for "being flat". Two kinds of possible curvature had a mid-point balance. Pi is a number that emerges due to something more holistic going on. The fact that it emerges "right there" on the number-line is not some kind of weird magic.
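That can be checked numerically (a small sketch, function name mine): on a unit sphere, a circle of geodesic radius r has circumference 2*pi*sin(r), so its circumference-to-diameter ratio is pi*sin(r)/r - and that ratio only reaches pi in the flat small-circle limit.

```python
import math

# On a unit sphere the circumference/diameter ratio of a circle of
# geodesic radius r is pi*sin(r)/r. Curvature drags it below pi;
# it only approaches pi as the circle shrinks toward flatness.
def ratio_on_sphere(r):
    return math.pi * math.sin(r) / r

for r in (1.0, 0.1, 0.001):
    print(r, ratio_on_sphere(r))  # climbs toward pi as r -> 0
```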

It is even easier to see with other constants like e that are directly derived from growth processes. There the contrasting actions that produce the emergent ratio are in plain sight. It seems funny that e should be 2.71828. But that becomes obvious when it is realised that growth always has to start from something that is just itself: 1. There is no reason to think of e as anything but natural after that.
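That growth story is literally how e is defined (a minimal sketch): start from a unit 1 and compound unit-rate growth over ever finer steps.

```python
# e as the limit of compounded unit growth: start from 1, grow at
# rate 1, but split the growth into n ever-finer compounding steps.
def compound(n):
    return (1 + 1 / n) ** n

for n in (1, 10, 1000, 1_000_000):
    print(n, compound(n))  # climbs toward 2.71828...
```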

    You and Kant. He was wrong. You're wrong. Euclidean geometry's not special. It's just something we seem to have an intuition of.fishfry

    But I am not Kantian, except in a loose sense. I'm Peircean in the way Peirce fixed Kant.

    And I'm arguing flatness is special as the mid-point of opposing extremes of curvature. It has physically important properties too. Only flat geometries preserve invariance under transformations of scale. That is a really important emergent property when it comes to things like Universes.

It's true that the ratio of a circle's circumference to its diameter is pi, but if it were 3 or 47 or 18, you'd be asking why it's that? It's just what it is. The only really interesting thing is that the ratio is always the same no matter what size the circle is! That's the real breakthrough here, that was a great discovery once. [Edit - You made the point that this is only true in Euclidean geometry. Point taken].fishfry

    And as I repeat, it is very important metaphysically that absolute scale invariance only appears at a particular numeric value of pi. That is how a Universe is even possible.

So you are focused on the triviality of pi being given some particular position on the number line - look guys, it's 3.141592653589793238462643383279502884197169399375105820974944592307816406286208998628034825342117067982148086513282306647093844609550582231725359408 ...

    And that is what makes folk go woo. It seems both weirdly specific and weirdly random. There seems no natural reason for the value.

    But it's a ratio derived from the radius being granted as the natural unit. Let's call the radius 1. Let's get a grip on this weird thing called curvature by starting with the "most natural part of the story" - a line segment. That gets to be "1" on the number-line.

Well, as I say, once mathematicians woke up to the fact that flatness was a rather special case of curvature, and once physicists in turn woke up to the fact that scale invariance was essential to any kind of workable Universe (it's called, rather grandly, the cosmological principle), well, maybe it is the ratio that should be called "1". A straight line segment is only a natural unit in the context of an already flat space which supports unlimited scale transformations. It depends on the emergent fact of parallel lines or infinite rays being an actual possibility.
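The scale-invariance point has a textbook form too. For a geodesic triangle of area A in a space of constant curvature K (a consequence of the Gauss-Bonnet theorem):

```latex
\alpha + \beta + \gamma \;=\; \pi + K A
```

Only when K = 0 is the angle sum independent of the triangle's size, so only flat space supports similar figures - the same shape at every scale.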

    You are really into pi mysticism. What I mean is, what you wrote here is pretty word salad-y. I have to repeat, I only picked pi because it's a good candidate to make the point that numbers are abstract and not physical. I could have made the exact same point with 3, but people have a harder time understanding that 3 isn't any more physical than pi.fishfry

    I am being anti-mystical in pointing out the very physical basis of pi as a number. It is a ratio that picks out a critical geometric balance.

The number 3 is trivial by comparison. Well, there are physical arguments for why the geometry of a universe is optimal if it has just three orthogonal spatial directions. But 3 as a member of the integers has no numeric specialness by design. The special or natural numbers are 1 and 0. We see this in the symmetries captured by identity operations. There is something basic or universal when we hit the bedrock that is a symmetry or invariance.

    You would call it a mystical fact perhaps. I see it as quite reasonable and self-explanatory.

    * So to sum up:

    - You are arguing from a computationalist point of view, but I'm not sure what point you are trying to make. Looking back I see that now. Even if I agree with you that mind is computation, there are still numbers and rocks. I possibly did not follow your argument.
    fishfry

    Nope. At least not your notion of computation as Turing machine/programmable computation.

    I take an information theoretic perspective. And more specifically, a semiotic one. In technology terms, neural networks come the closest to implementing that notion of computation.

    And numbers vs rocks is a distinction that relies on a classical metaphysics - one in which the divide between observers and observables does not present an epistemic difficulty. The epistemic cut - the necessary separation of the information from the physics - can be treated as an ontological fact.

    So my positions on both "mind is a computation" and "reality is classical" are the same. Semiotics starts from the view that there is no fundamental ontic division of observers and observables. But that is also the division which must emerge via some epistemic cut. It is the basis of intelligibility. And even the Universe can only exist to the degree it hangs together in intelligible fashion.

    Hence why maths tends to be unreasonably effective at describing the Universe. Or being in general.

- You are wrong that math is a computation. And like many computationalists, you underestimate or ignore the importance of non-computable phenomena in the world. Remember even Tegmark distinguishes between the mathematical universe hypothesis and the computable universe hypothesis. Computationalism is a very strong assumption.fishfry

    Labouring the point still, but I'm sorry. I'm not a computationalist in the sense you are hoping for. Indeed, that was what I was accusing you of. You seem to believe reality is a machine. An account of physical events is sufficient.

    But yes, you also seem to say the opposite. This is a symptom that your metaphysics is "commonsensical" and not well thought out.

    * Mathematicians do math, not philosophy. My sense is that the vast majority of working mathematicians never give any thought to philosophy. When an engineer is building a bridge, do you want him spending his time contemplating the fact that there is no difference between him and the bridge? Or do you want him calculating the load factors according to state of the art engineering principles?fishfry

    Again, bully for mathematicians. Bully for engineers. Bully even for most physicists (as very few are employed in frontier theory construction).

    But it is curious to be complaining about metaphysics where metaphysics is appropriate.

    And so far you haven't put forward any clear exposition of your own epistemic position, let alone given a clear justification for it. You just hoped to be able to label me with some obviously weak ontology that I spend most of my time arguing against.
  • Thought: Conscious or Unconscious activity?
    As others have said, this is simply a false dichotomy, an over-simplification.

    To be conscious here means to be a mental act that itself is now reportable as a mental act. So it is overt at the level of attention and working memory. It is something that has been "done" and so can be repeated as an action. It has a definite form. It is some phrase just said, or some image just conjured up.

    But then all such mental acts have to begin in some pre-conscious fashion. They must develop from some vaguely felt generality into some specifically articulated form. You can call this the unconscious gestation, but you can also pay attention and catch a comment or image while it is still just a vague "urge". So it is not strictly unconscious. You can be vividly conscious of some thought having just been on the tip of your tongue.

    And then all mental acts, if repeated often enough, can become themselves habitual or automatic. So now they are "unconscious" in a different way as you just emit them in learnt rote fashion without need of gestation or attention. You don't have to make an effort to produce the completed form of some familiar phrase or image. It will just flash into your mind of its own accord due to contextual cues.

    So thinking lives on both sides of this supposed borderline. And so far as is possible, the brain wants to turn every mental act into a habit. It wants to be as "unconscious" as possible - as that is the only way for thought to be efficient.

    But then, by definition, we need to "think through" mental acts that are to do with the novel, the dangerous, the significant. That is why we have a prefrontal cortex. That is why we have selective attention and working memory. And that is the kind of thought we think of as actually consciously thinking. We feel we can claim "I" was there as it happened.

    Yet this "I" in turn is a habit of social self-regulation. Layering complexity on complexity, to introspect on the forming of mental acts - to make them the subject of a further act of self-report - is something we all learn to do because society wants us to be accountable when our thoughts turn into behaviours. So we invent this notion of the "conscious self", this "I that was there", as part of the machinery of thought.

    It is an intricate ecosystem and the attempt to make sense of it with a simplistic binary - like conscious vs unconscious - is way too crude.
  • Responsibility in random actions and event
    Doesn't justice recognise this by accepting that liability is on a sliding scale?

    The law must reach some binary decision - guilty or innocent. But accidents vary in their degree of blameworthiness. Hence there will always be borderline cases - decisions that could go either way.

    In practice, maybe the law even errs on the side of letting drivers off?

    Between 2010 and 2014, there were 3,069 crashes with pedestrians in the Twin Cities and its suburbs. Of those, 95 people were killed, yet only 28 drivers were charged. Many of the deaths weren't even judged worth a traffic ticket.

    http://www.startribune.com/in-crashes-that-kill-pedestrians-the-majority-of-drivers-don-t-face-charges/380345481/

    So the general moral take-home would be that society expects us to be in "reasonable" control of our actions. And we know that "reasonable" is then tough to define, as there is always so much more we could do to prevent accidents or slip-ups. So at a social level, some kind of trade-off between the effort required and the potential damage that might be caused has to be agreed.

    There is a social norm when it comes to a duty of care, whether it be driving your car, doing heart surgery, or carrying a cup of coffee across your living room. Then justice is about making some black and white judgement on an individual instance. It has to be, because there also needs to be a specific action that follows. You can't half lock a bad driver up.

    The justice system of course has appeal courts and community service penalties, etc. But the principle would be that there has to be some social-level norm as a generality. And then to particularise this generality - apply it to some individual case - a line has to be drawn across the world. On one side are the social responses - the system of penalties or sanctions that can be imposed. On the other is your personal response - your freedom to think about what you just did in any way you like. You might well have rather a strong response to killing a pedestrian even if it was judged "a complete accident".

    So essentially this is the rather abstract Enlightenment view of humanity in action. Social norms are encoded in laws. They are treated by society as the statement of absolute constraints. Then by the same token, what is not forbidden becomes your personal freedoms. They are also just as absolute. The Enlightenment machinery would also recognise some basic freedoms as rights. This goes further in saying society can't write laws that impinge on these freedoms.

    Being human becomes a highly abstract affair on both levels. Actual humans are taken out of the equation as much as possible so that we become creatures of an abstract system.

    Of course societies don't apply this model with complete rigour. Social networks and community standards mean who you know, what power and status you have, can affect outcomes. People get away with what others will let them get away with.

    Abstract justice systems operate in a real human world. So a kind of meta-judgement needs to be made. Given the trade-off issue - the effort to enforce a strict rule of law vs the cost of that effort - it could be that a society is doing a "reasonable" job in being pragmatically relaxed. Or it could in fact be just socially corrupt.

    You were asking more about the notion of personal responsibility. Getting back to that, my reply is that there is good reason for the judgements of justice to be binary - the Enlightenment model wants to draw a clear line between society's constraints and your freedoms. The norms it encodes in law then do recognise a sliding scale in terms of just how much effort we ought to have to put into regulating our own behaviour.

    But deciding how general laws apply in a particular case is always going to encounter complex borderline instances. And a judgement still has to be made - either in favour of social sanction or individual freedom. However, the effort and cost of applying "blind justice" is a meta-consideration for society. Is perfection ever itself a reasonable aim?
  • Philosophical Progress & Other Metaphilosophical Issues
    I was thinking about how humanities departments justify their existence, given the push for STEM funding. They do seem to have to make that case these days.