• Definitions
    It was rather that distinction between the explicit and the ineffable, the said and the shown.Banno

    If your "saying" is based on metaphysical reductionism, then of course it can't speak to the holism that is the greater attainable view. You might be reduced to showing, rather than telling.

    But let's not get bogged down by the usual point that the only way to learn tennis or drive a car is to be shown how to do it - grab a racquet, get behind the wheel, and start understanding the ineffable essence of being a tennis player or car driver.

    There are grades of semiosis. Each is its own "linguistic community" in terms of the system of symbols that underpin it. Some of the major grades underpinning life and mind are genes, neurons, words and numbers. To learn the game of tennis, you must do that in the language understood by your neurons.

    Social concepts like "that is the service line, this is how you score" need to be communicated too. Words are good. Mathematics is better.

    Is the ball half on the line, in or out? Hawkeye can apply an algorithm to give the correct answer and remove any shred of human ineffability. If no umpire is really sure and so can't speak the truth, a calculating machine can ... to a millimetre or two. Differences agreed not to make a difference.
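    To make that concrete, the line call reduces to a crisp geometric rule. What follows is a hypothetical toy sketch, not the actual Hawkeye system - the function name, the one-dimensional geometry and the numbers are all invented: a ball is "in" so long as any part of its footprint reaches the line.

```python
def line_call(ball_centre_mm, ball_radius_mm, line_outer_edge_mm):
    """Toy 1-D line call. Positions are measured outward from inside
    the court; names and values are invented for illustration."""
    # The ball is "in" if any part of its footprint reaches back
    # to the outer edge of the line.
    nearest_edge = ball_centre_mm - ball_radius_mm
    return "in" if nearest_edge <= line_outer_edge_mm else "out"

print(line_call(133, 33, 100))  # half on the line -> "in"
print(line_call(140, 33, 100))  # clear of the line -> "out"
```

    Once the rule is this crisp, any residual millimetre of measurement error is a difference agreed not to make a difference.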

    So humans are complex beasts that live a life that spans multiple levels of semiosis. Ideally, they are all aligned in some kind of holistic harmony.

    But some folk never even develop a mathematical level of self. And some folk become so mathematical as to lose sight of life lived at those other integrative levels.

    It all comes down to a productive balancing act again. Arguing about dichotomies like said and shown, explicit and ineffable, is only a useful exercise if the argument eventually reveals the way they are two halves of the same whole.

    Have you yourself got there yet with this particular question?

    Can you tell us what "word" means?Banno

    In the linguistic sense, I think that is one thing Pinker managed to get right in talking about the dichotomy of words and rules.

    That was the answer I gave a few posts back - https://thephilosophyforum.com/discussion/comment/438849

    So words and rules are how we fracture an attempt to express an idea into a set of semantic parts arranged into a syntactical whole.

    A word is whatever constitutes a semantic part within such a structure.

    It's all pretty plastic and flexible. The plasticity is the feature and not the bug. So "cat" could be the semantic component. And "cattery" is two words - "cat" and "-ery" - combined via a rule.

    The rule is that the general idea (a cat) is constrained by the general idea of a type of purposeful facility. Draw the Venn diagram and form the right logical conclusion. The intersection is now a new more specified or particularised semantic unit - a "cattery". And that can get slotted recursively back into some syntactical adventure. We can speak of this cattery and not that cattery. The cattery that occasionally houses dogs or occasionally is empty.
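    The Venn-diagram logic can be sketched as plain set intersection. This is a toy model only - the candidate "things" and their features are invented for illustration, not a serious semantics:

```python
# Each morpheme constrains the space of possible referents.
# Invented miniature world of candidate things and their features.
things = {
    "Felix the cat":             {"cat"},
    "a boarding house for cats": {"cat", "purposeful facility"},
    "a kennel":                  {"purposeful facility"},
}

cat_like = {name for name, feats in things.items() if "cat" in feats}
facility = {name for name, feats in things.items() if "purposeful facility" in feats}

# "cattery" = the intersection: whatever satisfies both constraints at once.
cattery = cat_like & facility
print(cattery)  # {'a boarding house for cats'}
```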

    The ability to make further semantic distinctions via syntactical constraints is recursively infinite in principle.

    And a word is thus defined as the semantic aspect of what goes on. The novelty or significance that gets meaningfully shaped into being upon coming into interaction with the structuring habits of a rational grammar.

    Of course, "Colorless green ideas sleep furiously." That sounds as though it ought to be carrying some cargo of semantics. It is "perfectly" grammatical. And there are words. But we understand it to be nonsense. The words don't go together in a way that is rationally grammatical. No self~world state of intentionality that we can recognise is being expressed.

    That is, words - as in the single items we might look up in a dictionary - are a very reduced notion of semantics. The idea of speech as a concatenation of individually meaningful signs is another reductionist exaggeration.

    A speech act - whole phrases, sentences, even diatribes - can be "words" in the sense of conveying the holism of a complete mind~world intentional stance. A point of view worth distinguishing from the many others that might have been available but are now neatly rendered "unspoken".

    Syntax is the structure that pins down meaning to something that cannot be merely nonsense - some jumble of urgent noises or scribbles in the dust. Then individual words - like cat or cattery - are where this constraint on semantic interpretation hits the point where we become pragmatically indifferent to any remaining uncertainty.

    The boundary of "a word" is defined not by the information it contains - the dictionary approach - but by the information it serves to exclude. The negative space it serves to signal. There is no need to penetrate further to find the word's meaning. It simply marks the moment where digging more would be redundant in terms of fulfilling some particular communicative intention.

    Rabbit is whatever is not not-rabbit. Duck is whatever is not not-duck.

    A duck-rabbit is whatever is not not-duck-rabbit. So an old lady-young damsel is not a duck-rabbit. But oh, they are both Gestalt illustrations of the constraints-based approach that perception takes.
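    That double-negative definition cashes out straightforwardly in set terms - the figure is recovered by subtracting its own negative space. A toy model, obviously, over a tiny invented universe of discourse:

```python
# A tiny invented universe of discourse.
universe = {"rabbit", "duck", "dog", "cat"}

rabbit = {"rabbit"}
not_rabbit = universe - rabbit          # the negative space
not_not_rabbit = universe - not_rabbit  # subtract the negative space again

print(not_not_rabbit == rabbit)  # True: rabbit is whatever is not not-rabbit
```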

    Seems it is semiotics all the way down then.
  • Definitions
    That's not what I had in mind. .... It's more like seeing the duck or the rabbit, and realising that the same drawing gives rise to both.Banno

    So the disproof of the Gestalt argument is ... a Gestalt argument?

    There is no difference that makes a difference in the stimulus as such - from your "physicalist" point of view. But you can create a difference that makes a difference by shifting your state of interpretance - your "mental" point of view.

    A bad example if you want to dispute my position.
  • Definitions
    I was just reading Wittgenstein’s forgotten lesson.Banno

    So isn't this another false dichotomy we have to break through to discover the right dichotomy?

    One can oppose science and the arts as both being forms of life and so set the stage for which counts the higher form, which the lower. Who is our champion, who is the horrid bastard.

    Or you can play the other game of just shrugging your shoulders and saying it is just two different things. Higher or lower? It's all relative. There is no essential difference if everything is a form of life, some kind of internally coherent system of communication. The question of commensurability is as irrelevant as the question of incommensurability.

    I of course take yet another route of saying well we need to discover the complementary kind of dichotomy that brings some proper synthesis to the whole debate.

    With the opposition of the sciences and the humanities, what could this be?

    Pretty obviously it maps to the usual opposed poles of metaphysical being - the realm of the world and the realm of the mind. Or more pragmatically - the semiotic view of the mind~world relation - the sciences are focused on depersonalising our point of view, the humanities have as their own natural counter-goal the object of socially constructing what it means to be "most human". An ideal self.

    So in the "stepping right back from it" pragmatic view - the one that starts with the "form of life" metaphysics that Wittgenstein nicked unattributed - the sciences and the humanities should make a healthy opposition that can be more than the sum of its parts. We use them as inquiries to sharpen our notion of the world and of ourselves - as the two elements in semiotic interaction.

    Now this does conflict with many people's notion of humanistic inquiry. The advice there is to find "yourself", or worse yet "express yourself". Really, the advice needs to be "construct yourself". And as we are all socially constructed as "selves" (with a good dash of genetics of course) then we need to be able to talk about the "technology" of that construction. And even the purposes that would guide any such effort.

    That ought to be the fundamental business of the humanities. And what it finds in that direction ought to inform the sciences in their own matching voyage of "discovery" - or rather, its construction of the world as a useful image. A model of reality that has the anchoring point of view of a humanistic centre.

    So I guess I take a rather industrious view of both the humanities and sciences as academic disciplines. :smile:

    The difference isn't about science merely analysing reality while the arts are about properly living it, being in it, feeling it, discovering it as some deeper level or experiencing it on some higher plane. All the culture wars rhetoric of which stands above the other, or is the proper ground to the other - whatever it takes to be the primary, making the other secondary.

    Instead, a pragmatic/semiotic view - a form of life view - would argue that both "the world" and "the self" are the two halves of a joint construction. And progress lies in constructing the better total model. They are not separate exercises. The problems of modern life lie in the way they got disconnected pretty fast after a moment of unity in the Enlightenment. Scientism and Romanticism began the business of "othering" each other in an unhelpful way.

    Fetishising either the self or the world is the mistake. We need to be consciously engaged in a co-construction of these aspects of being alive and mindful. [Insert all the usual utopian visions of that here.]
  • Is space/vacuum a substance?
    A graph of the reciprocal function might help give a better visual representation of my argument about the start of time.

    [Image: function-reciprocal.svg - graph of the reciprocal function y = 1/x]

    So note that the reciprocal function describes a hyperbola. And we can understand this as representing the complementary quantum axes that define uncertainty (indeterminism, vagueness). Let's call the x axis momentum, the y axis location. In the formalism, the two values are reciprocal. Greater certainty about one direction increases the uncertainty about the other. The two aspects of reality are tied by this reciprocal balancing act.

    Now think of this hyperbola as representing the Universe in time - its evolution from a Planck scale beginning where its location and momentum values are "the same size". In an exact balance at their "smallest scale". The point on the graph where y = 1; x = 1.

    Note that this is a value of unit 1. That is where things crisply start. It is not 0 - the origin point.

    Now if you follow the evolution of the hyperbola along its two arms, you can see that in the infinite future the division between momentum and location becomes effectively complete. The curves are asymptotic, drawing ever closer to the x and y axes without ever touching them. They seem to become the x and y axes after infinite time.
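    The numbers make the point. A quick sketch of the curve xy = 1, with the axes labelled as above (the function name is mine, for illustration only):

```python
def location(momentum):
    # On the reciprocal curve the two complementary values are
    # tied by the constant product x * y = 1.
    return 1.0 / momentum

# The "unit 1" balance point, where the two values are the same size:
print(location(1.0))  # 1.0

# Follow one arm out towards the infinite future:
for p in [10.0, 1_000.0, 1_000_000.0]:
    # the complementary value shrinks towards 0 but never reaches it
    print(p, location(p))
```

    However far out you go, the product of the two values stays pinned at 1. The axes are only ever a limit, never a place the curve actually sits.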

    And then the catch. If you are an observer seeing this world way down the line where you believe the x and y axes describe the situation, then retrospectively you will project the x and y axes back to the point where they meet at the origin.

    Hey presto, you just invented the problem of how something came from nothing, how there must be a first moment, first cause, because everything has to have started counting its way up from that common origin point marked on the graph.

    A backwards projection of two orthogonal lines fails to read that it is really tracing a single reciprocally connected curve and is thus bamboozled into seeing a point beyond as where things have to get going from. It becomes the perennial problem for the metaphysics of creation.

    But if you instead take the alternative view - the reciprocal view that is as old as Anaximander - then the beginning is the beginning of a counterfactual definiteness. And that takes two to tango. Both the action and its context - as the primal, unit 1, fluctuation - are there together as the "smallest possible" start point.

    Where y = 1; x = 1 is the spot where there is both no difference, and yet infinitesimally a difference, in a distinction between location and momentum, or spacetime extent and energy density content. It is the cusp of being. And a full division of being - a complete breaking of the symmetry - is what follows.

    Looking back from the infinite future, the starting point might now look like y = 0; x = 0. An impossible place to begin things. But there you go. It is just that you can't see the curve that is the real metaphysical story.

    That kind of absolute space and time - the one where the x and y axes are believed to represent the actual Cartesian reality in which the Universe is embedded - is just a projection of an assumption. An illusion - even if a usefully simple model if you want to do Euclidean geometry or Newtonian mechanics.

    The Cosmos itself isn't embedded in any such grid. Instead it is the curve that - by the end of its development - has fully realised its potential for being asymptotically orthogonal. So close to expressing a state of Cartesian gridness, Euclidean flatness, Newtonian absoluteness, that the difference doesn't make a damn.

    It gets classically divided at the end. But it starts as a perfect quantum yo-yo balance that is already in play from the point of view of that (mistaken) classical view of two axes which must meet at the big fat zero of an origin where there is just nothing.
  • Definitions
    [Some] hold that there is such a thing as the meaning of a word; and that any worthwhile theory of language must set out, preferably in an algorithmic fashion, how that meaning is to be determined.Banno

    [Others] will go along with quine: Success in communication is judged by smoothness of conversation, by frequent predictability of verbal and nonverbal reactions, and by coherence and plausibility of native testimony.Banno

    These don't have to be two incompatible views. They could be two extremes of a continuum.

    The general algorithm is a logical division of things into figure and ground, signal and noise, information and entropy.

    Sometimes differences make a difference. Sometimes differences are a matter of indifference. So the general algorithm is the pragmatic one of how divided do we have to make the world so as to be able to talk about the world usefully?

    Communication is smooth when two speakers are on the same page. They read the world the same way in terms of what is figure, what is ground, what is signal, what is noise.

    Further difference-making is a wasted effort as that is pursuing differences that don't make a difference.

    But equally, the communicative balance breaks down if the speakers discover some remaining vagueness in their language. A lack of bivalent precision - a failure to be clear about differences that do make a difference - becomes something that demands further work.

    So a community of speech (or semiotic interactions) relies on hitting that Goldilocks balance of being neither too vague nor too crisp, neither too indeterminate nor too determinate.

    When communication goes smoothly, that only says a productive balance has been achieved. Some pragmatic division of reality into figure and ground - as a shared psychological model of that reality - has been reached and is serving its particular purpose.

    But purposes change. A sharper view may be required. A stricter definition of terms becomes a useful exercise.

    Or maybe the opposite applies. The discussion is too bogged by irrelevant details. Differences that don't make a difference. A greater degree of vagueness about the parts will allow a better focus on the whole.

    Does a cat always have two ears, four legs and whiskers? Generally and yet not always. A smooth conversational balance relies on a remarkably well tuned ear for an appropriate degree of definitional precision.

    So the algorithm involved is a triadic balancing act. It is a system framed by its black and white extremes, then all the shades of grey that emerge as the choices in between.

    The world can't be a matter of "every difference making a difference", nor "no difference making any difference". It can't be all signal, or all noise. Not if it is ever going to include a "point of view" worth speaking about.

    Instead speech relies on a world of contrast - that part which we find it worth speaking about, and that part we also speak about by not in fact referring to it. What we leave out of speech acts is just as important when having a conversation.

    Hence the pragmatics of also resisting the idea of giving definitions. Stopping to do that interrupts the smooth flow. The interpretive context of every proposition should be taken as read. To speak about it would be redundant. Or worse yet, it would fail the test of being the part not being spoken about. The part of every speech act that is drawing the line at the pragmatically right place in terms of an appropriate ratio of figure and ground, event and context, signal and noise.

    Speech acts have their negative space as well as their informational content. I somehow feel this isn't well understood in Philosophy of Language discussions. But it should be obvious from the practical psychological basics of cognition.
  • Is space/vacuum a substance?
    Do you not apprehend the necessity of a "being" which applies these constraints?Metaphysician Undercover

    If I visited another planet and found all these ruins and artefacts, I would feel they could only be explained as machinery constructed by a race of intelligent beings. That would be a logical inference.

    But if I visited another planet and found only mountains and rivers, plate tectonics and dissipative flows, then I would conclude something else. An absence of intelligent creators. Only the presence of self-organising entropy-driven physical structure.

    This demonstrates very clearly that you do not understand final cause, nor do you understand freewill.Metaphysician Undercover

    I simply don’t accept your own view on them. That’s different.

    However, I think that Peirce had very little to say about either of these, and you are just projecting your misunderstanding of final cause and free will onto Peirce's metaphysics.Metaphysician Undercover

    He emphasised the role of habit instead. Constraints on action that explain both human psychology, hence “freewill”, and cosmology if the lawful regularity of nature is best understood as a habit that develops.

    So it is usually said he was very Aristotelean on finality. But he also wanted to show that any “creating mind”, was part of the world it was making, not sitting on a throne outside it.

    But the fact of the matter is that the existence of artificial things is much more accurately described by the philosophy of final cause and freewill, and naturalism can only attempt to make itself consistent with final cause by misrepresenting final cause.Metaphysician Undercover

    So we agree there for quite different reasons. :grin:

    Furthermore, I never described any "collection of instants", nor did Newton rely on any such conception.Metaphysician Undercover

    OK I accept Newton’s arguments were more complex. He had the usual wrestle over whether reality was at base continuous or discrete. Were his infinitesimals/fluxions always still a duration, or did they achieve the limit and become points on a line?

    But his insistence on time as an external absolute was how he could also insist that all the Universe shared the same instant. Simultaneity.

    And note that the argument I’m making seeks to resolve the continuous-discrete debate via the logic of vagueness. Neither is seen as basic. Instead both are opposing limits on possibility. And this is the relativistic view. Continuity and discreteness are never completely separated in nature. But a relative degree of separation is what can develop. You can arrive at a classical state that looks Newtonian. Time as (almost) a continuous duration while also being (almost) infinitely divisible into its instants.

    That is the issue which modern physics faces, it does not respect the substantial difference between past and future.Metaphysician Undercover

    That is where incorporating a thermodynamic arrow of time into physics makes a difference. It breaks that symmetry which comes from treating time as a number line-like dimension - a series of points that you could equally read backwards or forwards.

    Once time is understood in terms of a thermal slope, an entropic finality, then the past becomes different from the future.

    What has happened - the past - now constrains what is possible as the future. Once a ball rolls halfway down the slope, that is half of what it could do - or even had to do, given its finality. Its further potential for action is limited by what is already done.
  • Mind Has No Mass, Physicalism Is False
    Not trying to be funnycsalisbury
    Success!

    The above is my best imaginative attempt at understanding what it's like to have reached that point.csalisbury
    Fail!

    Not funny or mocking, but certainly intended to provoke.csalisbury
    Troll!

    It's not funny to me; it's scary. It's like a spider in a hole.csalisbury
    Hyperbole!

    It seems to me you have a loud and firm internal (inescapable?) voice that quickly stifles anything approaching surprise...csalisbury

    Alternatively I have put in the work and know what I'm talking about. And I am up for well-crafted counter-arguments. Are you up to providing them though?

    Your parody only illustrates your own confusion about anything I have said. And I've said it all so extremely simply for your benefit too. :wink:
  • Mind Has No Mass, Physicalism Is False
    If you want to be funny, it has to achieve the kind of "surprisal" I was just speaking about.

    You have to start the listener on one path and then reframe things in a way that shows it can be understood in quite another. The aha! of a rapid reorientation tickles the pleasure spot of the brain.

    Of course if you want to mock, that's a different exercise. Similar, but you want to produce an aha! realisation that connects to fear and anxiety instead. Your desire is to enforce your social norm.

    Ask Banno for tips. He has mocking down to an art. As evidenced a couple of posts back.
  • Mind Has No Mass, Physicalism Is False
    Sometimes the brain is online, and sometimes it is not.darthbarracuda

    The brain is always "online" if you are alive. All neurons are firing all the time even in your deepest sleep. They have to as otherwise all the biological structure would fall apart. Functioning holds it together.

    What gets shut down is the integrative coherence of what is going on. An awake state depends on the precise modulation of neural firing rates. It all has to come together like an orchestra playing a tune. Deep sleep is then more like an orchestra disjointedly tuning up for a few hours.

    The weight of the brain is always a wrong measure to discriminate anything useful. The right physical measure - the meaningful one - is entropy dissipation.

    And even a sleeping brain runs pretty hot - just as an orchestra tuning up still makes an energetic racket.

    So an awake brain has to be measured in even more subtle entropic terms - the Bayesian Brain approach that measures its global level of integrative coherence in terms of a free energy principle.

    What gets measured here is the brain's ability to resist the world's surprises. While sleeping, we have limited awareness and thus a limited ability to predict the events of the world. While awake, that is what the brain is doing. Trying to out-predict reality. And then having to stop and learn - attend and think - when the predictions fail.
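    That notion of out-predicting reality can be given its standard information-theoretic reading. A toy illustration of surprisal only - not Friston's full free energy formalism, and the probabilities are invented:

```python
import math

def surprisal_bits(p):
    # Shannon surprisal: the less probable the brain's model judged
    # an event to be, the more "surprise" its occurrence carries.
    return -math.log2(p)

print(surprisal_bits(0.99))  # a well-predicted event: ~0.014 bits
print(surprisal_bits(0.01))  # a failed prediction: ~6.6 bits of news to learn from
```

    A brain that predicts well keeps its surprisal low. Attention and learning are what get recruited when the number spikes.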
  • Mind Has No Mass, Physicalism Is False
    Lies to children.Banno

    One way to spend your Saturdays.
  • Mind Has No Mass, Physicalism Is False
    Between a dead me and an alive me there's something missing which doesn't have mass.TheMadFool

    Sure. What goes missing is entropy dissipation at the organismic level. You no longer turn any food shoved in your mouth into useful work.

    But leave your dead body a few hours. It can become a feast for other hungry "minds".

    Weigh a well-rotted corpse, along with its oozing and vapourous losses due to decomposition. The total biomass might well be more for a time before it leaches away into the ground. All that extra "mind" might actually add mass.
  • Mind Has No Mass, Physicalism Is False
    There would be a corresponding change in the mass between a living brain, which itself includes electrical currentBanno

    Electrical current flowing in the brain? And the electrons are firing along at relativistic speed while we are still awake and alive?

    :rofl:
  • Is space/vacuum a substance?
    The problem here is that you do not account for the acting free will, final cause. It does not act according to these constraints, the determining context. It acts according to what is desired for the future. Yes it is constrained, but the primary objective is to bring about what is desired, regardless of constraints.Metaphysician Undercover

    That is only a problem from your theistic presumptions. It is the basic inconsistency in theism or idealism that my version of physicalism resolves.

    Finality is not about "free will". It is about the inescapability of the emergence of natural law - global habits of regularity that arise directly from nature's efforts to instead attempt to head locally in every direction at once.

    You don't understand Peirce's metaphysics yet. But this is the guts of it.

    "Potential" is a human conception which is perspective dependent. An apple hanging in the tree has potential energy due to the force of gravity.Metaphysician Undercover

    Citing Newtonian mechanics here is odd given that it is indeed a highly technical and reductionist perspective on whatever "potential" might mean.

    Well I guess you need to match your theism with its "other" of scientism to avoid talking about physics in the holistic way I am doing. But clearly I don't accept your attempt to limit the concept of "potential" so strictly.

    Theories about entropy and heat death, only describe potential from the human perspective, the human capacity to harness energy.Metaphysician Undercover

    An engineer might have that human concern. A cosmologist is more interested in how that technical language speaks to thermal gradients. It is not about a potential to do work (serve human finality). It is about a potential to roll down a "second law" entropic slope (and thus serve cosmic finality).

    But at the first moment in time there is necessarily no past. Can you apprehend this?Metaphysician Undercover

    It is you who think in atomistic moments to be strung like beads on a chain. So this is why you end up with the problem of either having to have a first moment, or an infinity of moments.

    My view is about effective scale. So at the beginning everything is the same "size" and so indistinct or vague. By the end scale is as polarised as it can get. The small is as small as possible, and the large as large as possible.

    In the Heat Death, the visible universe has reached its maximum extent due to the inherent limits of its holographic event horizons - technical jargon for the distance any light ray can reach before the ground under it is moving so fast that effectively it winds up standing still ... as is the case when you fall into a Black Hole.

    And it has also reached its minimum average energy density as every location within that spread of spacetime now has a temperature of 0 K and so the only material action is a faint quantum rustle of virtual particles.

    So this is a very different conception of "time" than your Newtonian one. It is not a collection of instants - truncated or endless. It is instead a reality that is truncated at one end by symmetry - an absence of any concrete distinctions. And then truncated at the other by its opposite - a completely broken symmetry where energy density and spacetime are poles apart.

    Everywhere is cold. Everywhere is large. And it is all one great "moment" - a continuity - in that it is a single story of symmetry breaking, a single thermal history of development. It begins and ends for reasons internal to its own structure-creation. There is no "outside" against which its existence can be measured.

    Pragmaticism does not produce good metaphysics.Metaphysician Undercover

    It is the only test of bad metaphysical theories.
  • Definitions
    What matters is both the proximity of what the cat likes and your expression of "dislike" and its force. Tell your cat tomorrow what it did wrong today and you won't accomplish anything.tim wood

    How did you manage to extract that as something I might assert as being otherwise? Do you not think that was an omission of the bleeding obvious? :grimace:

    As it happens, I was having to deal with the whims of my cat - its insistence on sitting on my lap - as I tried to tap these words on the keyboard. So I am well aware of the pragmatics of these things.

    Even forceful speech is no use. Physical propulsion is what is required. :grin:
  • Neil Armstrong's Memory Of The Moon And Physicalism
    One could say that a memory of a place is not the same as physically being at that place...TheMadFool

    Case closed then surely?

    ....but the question is what's the difference between being physically at a place and a memory of that place? Do the two not fade into each other - there's a continuity there, right?TheMadFool

    Well obviously no. There is every kind of physically-relevant difference. For a start, for one you need to be wearing a space suit, while for the other you would want to be in your tourist clothes. And you wouldn't want to mix the two up. While in the comfort of his own home reliving the memory, Mr Armstrong could wear what the hell he liked without making a material difference.

    As @Banno says, a picture of a thing does not "fade" into the actuality of that thing. It stands - for us - as a sign of some experience (or semiotic state of interpretance).

    So yes, the "mind is not physical" in some general way. But calling it immaterial as opposed to material doesn't really work. Cartesians have been banging their heads against that wall for long enough now to show it is a failed approach.

    I'm making my usual too-subtle point that the "mind" is about a modelling relationship that an organism has to have with the world to be in that world. The mind thus has to have a physical basis - neural signals take time and energy. But also, by making the cost of that physical basis a constant tax on symbolic thought, the thinking becomes costless and free ... effectively.

    The thinking just has to pay for itself by pragmatically producing the means that underwrite its existence. The organism must have food and water, plus all of the other things that make life possible and worth living.

    So your OP highlights the fact that recalling different scenes looks to have zero physical cost. I am adding the key rider that this is only actually a zeroed common physical cost.

    In the end, that makes a world of difference to this whole mind~matter debate.
  • Refutation of a creatio ex nihilo
    No-thing is no formed thing or no contingent thing or thing that can be defined. This is the void which is the source.EnPassant

    Matter is nothing in the sense that it is only form.EnPassant

    I agree with the trajectory of your argument. But I say it needs to go further.

    Logical analysis does its usual useful trick here of finding the dialectical structure that is at the heart of any thing. Every individuated or actualised thing is a product of its material and formal causes (as hylomorphism tells us).

    So the Universe - as something that is individuated and actualised, a state of substantial being - must itself be the product of the same combo. It must divide into its material and formal causes.

    Creatio ex nihilo doesn't work as a true nothing would be an absence of material and formal cause too.

    Theism or Platonism doesn't work either: it might posit a formal cause, but it is pretty much mute about material cause. There is no workable complementary definition of the two aspects of causality, as there would need to be if the Universe is going to be its own natural bootstrapping story - something that can ultimately be its own cause and so provide a model of causal closure.

    Formal and material cause need to be seen dialectically as two aspects of the one world so that individuated substance becomes the emergent product of a closed causal process.

    So how to recast Big Bang cosmology in this light?

    We can think of the essential dialectic as constraints on degrees of freedom. In the beginning, there was a random everythingness. Fluctuation in every direction and so nothing happening in any direction in particular. You wouldn't even have 3D space and its collective thermal direction that is the entropic gradient we call time. There would be an infinity of directions and so no directionality worth speaking of in this ur-state. A perfect symmetry of indeterminism. A blank everythingness that is neither material, nor enformed. Just a pure vagueness or state of potential.

    It doesn't even exist. It is "there" only as the limit of what it would mean to exist - to be substantial.

    Individuated existence - as the Big Bang creation event - would then get going as this Apeiron, this ocean of fluctuations, first gained some degree of form, and hence a matching degree of materiality as that which is dialectical to that form. Or equally, we could put that the other way round as the first degree of materiality that thus was also the first degree of an enformed existence.

    So in terms of standard physics, we are talking about some actualised constraint of an infinite potential towards some meaningful degree of dimensional limitation. If there is the matching thing of an ocean of fluctuations, they are now fenced into a common direction of some number of dimensions. A process of coherent or systematic evolution - a flow that now inexorably leads to its own most simple solution - is now in play.

    In no time at all (as time is what emerges via this self-organisation) the Cosmos will flash through all the lesser balances of material~formal causality - all the looser levels of breaking the fundamental symmetry of the Apeiron - to arrive at the maximally broken one: our Universe, which is the classical physical limit on the radical uncertainty represented by a theory of quantum gravity.

    Krauss's "something from nothing" account is certainly clunky. It reflects the metaphysical prejudices of reductionists and positivists. They believe that only material causes are real. Formal causes are useful fictions that stand outside the physical world they describe.

    But the mathematics of symmetries and symmetry breakings shows that formal causes - as constraints on material differences - are fully real. Their structure is as physical as the fluctuations they regulate.

    Quantum theorists are quite happy talking about virtual particles and other extravagances like multiple worlds. Entropy and information are two sides of the same coin. Materiality has pretty much disappeared from our raw account of nature - or at least has been softened to the right degree to allow formal cause to be just as physically real.

    A suitable dialectical balance has been arrived at in the metaphysics of fundamental physics. Well, in fact things may have swung too far towards the formal aspect, if we are honest.

    That is how Krauss does the confusing thing of talking about the Big Bang as a great big quantum fluctuation in a "field" - a field that has no place outside of the spacetime which then emerges from it in its material fashion. The formal aspect of the mechanics - the quantum formalism - is invoked. But it has to act on no-thing as its "field".

    This would have to be corrected by an account that sees both the collapse mechanism, and the probability space being collapsed, as the two halves of the one action. Each has to develop into concrete form as a mutual or synergistic deal.

    Krauss is employing a quantum formalism developed for application to an already 3+1D spatiotemporal world. It speaks to that end state accurately. But what is the quantum formalism that would apply to an infinite dimensional start point - an utterly unformed and unconstrained notion of "indeterminate everythingness"? That is the question to be asking.

    Anyway, the void imagined as an Apeiron is not empty. It is just so full of unformed possibility as to be radically vague. It is as lacking in counterfactual definiteness or individuation as it is possible to be. And that applies equally to its material and formal aspects of being.

    Each of those starts at its least, which is why - dialectically - they can then, indeed must, develop towards their most. The "desire" of a perfect symmetry is its own breaking. And the least event will start that process "spontaneously". Once the ball starts to roll, it can't stop until it arrives at its simplest position.

    This is the metaphysics encoded in the physics of spontaneous symmetry breaking. It is how material physics accounts for materiality - states of matter that include plasmas and condensates - these days.
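
    For anyone who wants the stock textbook picture behind that metaphor, the simplest case is the quartic "Mexican hat" potential (standard physics, not anything specific to this thread): the symmetric state sits at an unstable maximum, so its own breaking is the only stable outcome.

    ```latex
    % The simplest spontaneous symmetry breaking: a scalar field \phi in
    %   V(\phi) = \tfrac{1}{2}\mu^{2}\phi^{2} + \tfrac{1}{4}\lambda\phi^{4},
    % with \mu^{2} < 0. The symmetric point \phi = 0 is a local maximum -
    % unstable - so the least fluctuation rolls the system into one of the
    % asymmetric minima.
    \frac{dV}{d\phi} = \phi\left(\mu^{2} + \lambda\phi^{2}\right) = 0
    \quad\Rightarrow\quad
    \phi_{\min} = \pm\sqrt{-\mu^{2}/\lambda}
    ```

    The point of the formula is just that the "perfectly symmetric" state is not the restful one; stability lies only in the broken configurations.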

    Krauss plays the old school reductionist as he is beating the cultural drum against the theists. Good for him. Preach in the language the masses understand. Be part of that conversation.

    Meanwhile back in the lab, the theorists have learnt to think like dialectical holists when it comes to the issue of substantial being.

    Formal cause is global constraint and material cause is local indeterminism. Each makes the other.

    Constraint shapes indeterminism into deterministic degrees of freedom - actions with directions. And local indeterminism is that hot action awaiting some coherent direction so it can become an actualised flow of events. The flow then builds the constraints that are doing the determining as the system's "emergent" macroproperties.

    Like the turbulence in a stream, vortexes are formed as collective phenomena. Water molecules get sucked into a direction that becomes a self-sustaining rotation because of its critical mass. All random action is being directed the same way.

    In material science, this is why you can get collective states of matter like Bose-Einstein condensates or superconductors. The formal causes conjure up their material actions. And that works as the collective action is also producing those global states of constraint, or enforced coherence. The story is of a local~global, micro~macro, synergistic interaction.

    It is a causally-closed and bootstrapping explanation of holistic interaction. Just the kind of physics we need to conjure a Universe out of a "nothing" - a void - that was also the vaguest "everything". An Apeiron.
  • Neil Armstrong's Memory Of The Moon And Physicalism
    What I'm looking at here is the immaterial side to the mind.TheMadFool

    You mean the symbolic aspect? The mind is an organism’s model of the world. To the degree it can symbolically model space and time situations, it is outside of those situations as a point of view. It can switch freely among different memory-based reconstructions.

    There is a physical time and energy cost involved. The imagination has a standard refresh rate. But that cost is the same for any act of reality modelling. So there is no further physical limitation on the switching of views. The leaps from one point of view to another can be as small or large as one likes. The time and energy cost is there, but for the modelling, it is a built-in constant, not proportional to any actual real world physical effort.
  • Neil Armstrong's Memory Of The Moon And Physicalism
    It takes about half a second to form one mental image and then start replacing it with the next. This is no surprise as neurons conduct their signals at earthbound rates - a few hundred miles per hour at most, and way less where the axons are small and uninsulated.

    So the materiality of the mind-brain shows up in the speed at which thoughts and images can be formed, or in other facts such as the brain using as much energy as muscle.
  • Definitions
    Not sure they do. I'm not particularly well versed in grammar, but "shh" or "ah" still has a correct place in sentence structure doesn't it? You couldn't put them just anywhere and expected to be understood?Isaac

    They are used outside of any grammatical structure. That was my point.

    What is the future perfect of “shh”? “I will have shh-ed John before he could speak.” That would be using shh as a word to describe an action. So shh is both the action and - if used successfully in a grammatical structure - a symbol of the action. And quite a primitive symbol in being an icon of the action.

    It actually sounds a bit wrong unless a poetic effect or some other pragmatics was intended in “shh-ed”. We would say shushed or some other word that removed the confusion of whether we were suddenly telling our listener to shut up in the middle of a sentence.

    Would words used purely emotively or as behavioural triggers then cease to be words, would they be, by their use, ruled out of 'grammatical speech'?Isaac

    There is a neurological pathway difference when we utter emotion driven words like “fuck” or “bugger”. The limbic part of the cingulate cortex - the emotion processing part of the higher brain which is the social vocalisation area of the mammalian cerebrum, responsible for screeches and cries - produces these kinds of expressive, but stereotyped, noises.

    Grammatical speech is handled by a different set of circuits. So - as we know when we are overtaken by inarticulate rage - the two actually feel like competing forces for control of our vocal cords. We may even swear in colourful habitual phrases. But something different is happening from formulating novel acts of speech.

    This is one of the things about symbolic and grammatical speech acts. Every sentence can be a fresh surprise, even to us. We wait to hear what we say so as to judge the sense of what we now seem to think. It is a live attempt to solve a problem when we seek to put the world into words.

    Swearing at someone is not a creative effort at that same abstracted level. It is using the cingulate’s rather more limited vocal repertoire of some well used vocalisations to bring about some result or other in a social setting. Or just to complain about life in general.

    Is saying "no" in answer to a simple question using a word, but saying "no!" to banno's cat something else?Isaac

    Logical thought is a grammar that is designed to have a yes/no answer. Telling someone no as a social expression is giving them that answer before they even asked the question.

    The cat will certainly understand your angry and warning tone even if you were to growl “yes” as your habit. And if you say “no” sweetly, the cat will struggle to read your intentions.
  • Inherent subjectivity of perception.
    I see it as many people all feeling the same elephant. So they create different jargon, different metaphors. But they are trying to get at the same thing in the end.

    Vygotsky was the one that really had an impact on me and made everything click into place.

    Social constructionism also took off in the 1980s with Rom Harre’s group, working on the social construction of emotions, being important.

    Is social psychology your thing? I confess I’m not up to date on who’s who these days.
  • Definitions
    "Shhh", "Oi", "Hey", "Ah"... They're word's which just 'do something' on a very primitive level.Isaac

    Seems a bit grand to call them words. Is anything much lost by calling them social signs or expressive vocalisations?

    I associate words with being parts of sentences. So they are really about the nested hierarchical nature of true speech acts. Components arranged by rules.

    Your examples are certainly part of the pragmatics of social co-ordination. But they stand outside the grammatical system in which a word is a semantic unit being organised within the constraints of some syntactic rule.

    “Hey” stands alone quite happily as the social context provides sufficient information to allow its interpretation as a sign. But we are doing something else when we are using a grammatical structure of words to convey the interpretative context via semantic symbolism.

    The evidence of directly learnt responses to words opens up that possibility even with words whose meaning is also referential - ie just because a word refers to something, it doesn't mean that's always what it's doing in an expression.Isaac

    Sure. Words are always vocalisations. But vocalisations don’t always need to be words to be part of a social system of coordinating sign. That seems obvious enough from the grunts, hoots and hollers of any social species.

    My claim is only about what makes grammatical speech so special - the power of symbols and rules. That doesn’t rule out every other step along the way to full-fledged language. They don’t have to be eliminated from the repertoire. We are still social animals as much as grammatically structured thinkers.
  • Is space/vacuum a substance?
    If we look at reality, as we know it, to find out what distinguishes or separates the determinate from the indeterminate, we see that the past is determinate, and the future indeterminate, with the present separating these two.Metaphysician Undercover

    Or rather that the past is the determining context. The future is created by what then becomes determinate due to the application of these constraints. The present is the "now" where global historical constraints are acting on residual indeterminacy to fix it as some new actualised event. So the present is defined by the actualisation of a local potential via the limitations of global historical context.

    Or as quantum theory puts it, actuality is realised by the collapse of the wavefunction. A local potential and a global context are resolved to produce a result that is "determinate" and so now belonging to the generalised past, while pointing also towards a more specified future.

    Events remove possibilities from the world. And so shape more clearly the possibilities that remain.

    Time thus arises as the macroscale description of this directional flow. Potential becomes increasingly restricted or constrained over time as it is realised in particular happenings. The business of change takes on an increasingly determinate character - even if there thus also has to be a residual indeterminacy to give this temporal trajectory something further to be determined by contextual acts of determination.

    If I understand Peirce correctly, he wants to take one step further, and say that the present, which separates the determinate past from the indeterminate future (LEM not applicable), is itself a "vague" division. So at this time, the present, the LNC does not apply. So we have a determinate past, an indeterminate future which can only be predicted through generalizations (LEM not applicable), and a present which violates the LNC.Metaphysician Undercover

    As I point out, you call it a separation. I am talking about it as an interaction.

    The present as an act of local actualisation has to emerge from the interaction of what is past (the development of some global contextual condition) and what is future (the indeterminacy still to be shaped - but not eliminated - by that process of actualisation).

    I wouldn't get too hung up on mapping this directly to the laws of thought. We normally imagine them to be Platonic abstractions that exist outside of physical reality. So they are framed in language that is a-temporal from the get-go. Verbal confusion is only to be expected.

    But vagueness would describe the state of things at the beginning of time because the indeterminism in the system is macro. There is no history of actualisation as yet, and so no determining context in play.

    However by the time you get halfway through the life of the Cosmos - as we are in the present era - then it has grown so large and cold that it is most of the way to having only a microscale indeterminacy. The potential has been so squeezed that you can only really see it at the quantum level of physical events.

    At the macroscale, the Cosmos is now getting close to the other end of its time - its classically fixed state of maximum possible global determinacy. It has arrived at what Peirce calls generality. (Or continuity, or synechism, etc).

    Don't worry. It all makes sense.

    Suppose we take a many worlds interpretation of quantum physics, does this say that the sea battle both will and will not occur?Metaphysician Undercover

    Yep. But who wants to go with the MWI?

    This is the problem with wave theory. A wave needs a medium, and electromagnetism is understood by wave theory. Denying that there is a medium, and insisting that the activity is "wavelike" doesn't solve the problem.Metaphysician Undercover

    Alternatively, this is pragmatism. Accepting that we can only model reality. And so what matters is that the model works. It can solve our practical problems.

    Appealing to God is not to brush things under the carpet, but to realize the true nature of time, and how the first act must necessarily be an intentional act, final cause.Metaphysician Undercover

    So can you lift the carpet and provide the detail of who God is and how He does these things? What first act did He perform with the Big Bang? What intent can we read into its unfolding symmetry breaking? How much choice did He have over the maths of the situation?

    These would all be good starting points to tell us what is better about your model of existence. Let's see if you can say something that is not either too vague or too general.
  • Inherent subjectivity of perception.
    Yes, this fits very closely with the social philosophers I have been reading, Mead and Parsons certainly.Pantagruel

    Hah, that takes me back. Symbolic interactionism!

    After the standard indoctrination into the psych department cults of behaviourism and cognitivism, at last stuff that started to make sense. I stumbled on to Vygotskian psychology at the same time - his suppressed works only finally getting English publication.

    And then after another decade, Peirce also was dug up from the grave. It became possible to see how he had got to the guts of it first.
  • Inherent subjectivity of perception.
    There are innate mechanisms for processing sense data, which are acting even absent learning.aporiap

    In fact a huge amount of learning must take place for a new-born brain to be able to "process" the world in intelligible fashion.

    This blog post describes one of the classic experiments, showing both that the brain does need to learn its robust perceptual habits, and that forming an embedded model of the self is a large part of what has to be learnt...

    https://blogpsychology.wordpress.com/core-studies/cognitive-psychology/development-of-visually-guided-behaviour/

    Sure there are also simple reflex pathways established by birth. But that isn't really what people mean by "perception". It's not going to produce qualitative states of experience - a running model of a self~world relationship.

    And even these brainstem and midbrain level instinctual reactions involve a learning process. In the womb, a baby is still exposed to touch, taste, smell, sound and even a dim degree of light. There is adaptation going on.

    But in human babies especially, we are born with the cortex - the higher brain - largely unconnected, just a mass of neurons that then grow a thicket of synaptic connections in speculative fashion. At birth, the cortex is still adding neurons at the rate of a quarter of a million a minute.

    Then as the infant starts to interact with its world - when it gets the opportunity to be a self in opposition to a recalcitrant reality - things go the other way. A jungle of connections gets massively pruned to carve out the "sense data processing" habits of an organised brain. The pathways are created by cutting away the great excess of connectivity.

    EEG recordings of infant brains show this in action. Even showing something as simple as a diffraction grating - a grid of black-on-white lines - will cause many neurons to fire in the newborn visual cortex. Every cell reports it is seeing something, and so no cell is seeing anything in particular. The response is generalised no matter how fat or thin the grid of lines happens to be.

    But rapidly, as connections are pared back, the brain response becomes sharply specific. Now only a few cells react to gratings of a certain spacing. The clamour has gone. The brain can discriminate gratings according to the relative thickness or thinness of the lines.
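
    The "selection by elimination" logic can be caricatured in a few lines. This is a toy illustration of the idea described above - every neuron starting weakly responsive to everything, then pruning down to its strongest connection - not a model of the actual EEG study; all the numbers and names are my own invention.

    ```python
    import random

    random.seed(0)
    N_NEURONS, N_SPACINGS = 8, 4

    # Newborn state: every neuron responds a little to every grating spacing.
    weights = [[random.random() for _ in range(N_SPACINGS)]
               for _ in range(N_NEURONS)]

    def responders(weights, spacing, threshold=0.2):
        """Indices of neurons firing above threshold for this spacing."""
        return [i for i, row in enumerate(weights) if row[spacing] > threshold]

    # Before pruning: a generalised clamour - most cells answer to most spacings.
    before = sum(len(responders(weights, s)) for s in range(N_SPACINGS))

    # "Experience": keep each neuron's single strongest connection, zero the rest.
    pruned = [[w if w == max(row) else 0.0 for w in row] for row in weights]

    # After pruning: each spacing excites only its few dedicated neurons,
    # so the population can now discriminate spacings at all.
    after = sum(len(responders(pruned, s)) for s in range(N_SPACINGS))
    ```

    The discriminative pathways are made by cutting away connectivity, not by adding it - which is the reversal the paragraph above is pointing at.
    
    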

    So a newborn has a basic start, for sure. But it has to then tune in to the world (and the body) in which it finds itself. It has to learn to make sense of itself in an embodied fashion.

    When you think about it, how could our genes dictate the exact positioning of every neuron let alone its every connection? Genes can only regulate bursts of growth, bursts of pruning. So learning is a part of neurodevelopment. What is innate is then the general propensity to be able to develop a model of the self in the world that underlies the process we call perception.
  • Inherent subjectivity of perception.
    I wonder, at what point does the agreement, "there is a truth," degenerate into the disagreement "this is the truth?"Pantagruel

    The Peircean answer is when it becomes "my truth" rather than "our truth".

    Language binds us as social animals to a collective identity, a communal point of view, a culturally-constructed model of "the self". So "truth" becomes that to which a community of inquirers practising practical reasoning would tend.

    The community of inquiry is broadly defined as any group of individuals involved in a process of empirical or conceptual inquiry into problematic situations. This concept was novel in its emphasis on the social quality and contingency of knowledge formation in the sciences, contrary to the Cartesian model of science, which assumes a fixed, unchanging reality that is objectively knowable by rational observers. The community of inquiry emphasizes that knowledge is necessarily embedded within a social context and, thus, requires intersubjective agreement among those involved in the process of inquiry for legitimacy.

    https://en.wikipedia.org/wiki/Community_of_inquiry

    Pragmatism navigates the middle path between the extremes of relativism and positivism, or idealism and realism.
  • Inherent subjectivity of perception.
    That's an interesting assumption. Nothing more.Banno

    As I have argued, it seems more like basic psychological science. Understanding cognition and "truth making" has been a major human endeavour of the last 100 years.

    So I am curious. What is your actual model of the psychological processes that are in play when we utter propositions that speak of the world? Show by telling how you are not merely making the familiar errors of the naive realist.

    Psychology feels it has it well worked out. What is it that you dispute and why?
  • Inherent subjectivity of perception.
    Mine fits with my understanding, I'm sure yours fits with yours.Pantagruel

    Heh heh. The flat earther is the ultimate naive realist. Their point of view is the most subjective possible. It avoids all revisionist fact.
  • Inherent subjectivity of perception.
    As perception is the recognition of something already learned, then, how to perceive objective information, when subjectivity (its antithesis) lies in perception?Marax

    One point to consider is that perception is possible because it is the brain forming a model of the world with "you" in it.

    The brain is not trying to see the world "as it is" in some actually objective sense. That would be silly and useless.

    Instead, it is learning to construct a point of view in which "you" are being experienced as being an actor in a "world" that makes sense in terms of all that you could do (or not do).

    So in one sense, it is all subjective - the idealist trope. There is only the model in your head. The model that is of "you" in your "world". But it is also all objective. There is actually the world. And now there is also this further objective fact of a creature with a brain and a set of intentions, running about doing material things in causally effective fashion. The self may be an idea - the construction of a viewpoint - but there are physicalist consequences that flow from the reality modelling.

    This is the view of perception that is now standard in the enactive or embodied approach to cognition.

    An act of perception has to make two things: the world that is being seen and the self that is the anchoring locus of that seeing.

    The fact that both are a dynamical construction - two aspects of the one co-construction - can be demonstrated by what happens to you in a sensory deprivation chamber. A lack of feedback from the world also results in a depersonalisation of the self. Our physical boundaries disappear when we no longer feel the world in resistance to our actions.

    So perception is the act that leads to a stable seeming world and a stable seeming self as the two sides of the same process of constructing a "meaningful point of view". A model of the world with us as its centre.

    And that is why in perception the useful "information" is not objective. It is really a semiotic system of sign. The sensory system is set up from the get-go to deliver only the differences that make a difference.

    The brain has a model of the world in terms of what it has learnt to expect. And so it is then primed only to react to the physical events - the patterns of sensory energy - that could count as unpredicted and surprising.

    So this is another reversal on the usual view that the brain is a computer that must crunch its input - that objective physical information.

    In fact it processes the world the other way round. It has already decided how the world should be if nothing surprising or meaningfully different happens. And from that very self-centred perspective, anything actually novel or unexpected must leap out as the aspect of reality to quickly analyse and assimilate as best as possible to the running model of the self~world relationship.

    So the self predicts the world. The world is perceived ahead of time at the level of a perceptual habit or expectation. And then the act of perception is completed by a discovery of what failed to be foreseen and now has to be assimilated to re-stabilise that sense of being in the world as a causally-effective agent.
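
    The loop just described - predict first, react only to the unpredicted, assimilate the surprise - can be sketched in a few lines. This is my toy numerical caricature of the predictive story, not anything from the cognitive science literature itself; the function name and the learning rate are invented for illustration.

    ```python
    def perceive(expectation, sensory_input, learning_rate=0.5):
        """One perceptual cycle: register only the prediction error,
        then fold that novelty back into the running model."""
        surprise = sensory_input - expectation            # the difference that makes a difference
        updated = expectation + learning_rate * surprise  # assimilate, re-stabilise
        return surprise, updated

    # A steady world, then a sudden change: the unchanging signal produces
    # zero surprise, the change leaps out, and its impact fades as the
    # expectation settles around the new state of the world.
    expectation = 10.0
    surprises = []
    for signal in [10.0, 10.0, 20.0, 20.0]:
        surprise, expectation = perceive(expectation, signal)
        surprises.append(surprise)
    # surprises == [0.0, 0.0, 10.0, 5.0]
    ```

    The design point is that the model does almost no work while the world matches its habits; computation is spent only on what failed to be foreseen.
    
    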

    Perception is more about filtering out the actual world - as that blooming, buzzing, indeterminate confusion - so as to construct a perfectly self-centred point of view where there is a world that makes complete sense in terms of our intentionality or agency.

    Perception is as much a business of making an intelligible self, as making an intelligible world, in short.
  • Inherent subjectivity of perception.
    If you are standing at the antipodes of the globe, and the ball is dropped, then what is it's direction, relative to you?Pantagruel

    Zing!
  • Definitions
    In fact, I could prove to you with fMRI, that Pavlovian response triggers, even if they're words, pass neither through the ventral pathway of object recognition, nor through the areas of the cerebral cortex where we might expect with some concept recognition, but rather straight to the sensorimotor systems to get you to duck.Isaac

    Interesting example. What does it highlight then? For me, it demonstrates the developmental trajectory from iconic to indexical to fully symbolic levels of language. And how this becomes so as novelty (which would demand the whole brain being applied) becomes reduced to the simplest habit (where the brain simply emits a response without conscious deliberation).

    You have to wonder how “duck” became a word that could mean get your head out of the way fast. I would guess it arose iconically. The image I have is of the way a duck bobs its head. So there would have to have been some process of habituating that image within a language community - distilling it down to a learnt motor pattern where not stopping to consider the imagistic analogy was a major part of the deal.

    Shouting “magpie” might be a more meaningful command where I live. They have a habit of actually going for heads.

    But anyway, a key thing about symbols is in fact their lack of direct resemblance to anything they might represent. We call a duck a duck rather than a quack quack. The four letters and the sound they make could be the symbol for any habit of thought or behaviour. And that is precisely why they are so meaningful once we associate them with just the one (general) habit of interpretation. If some word-noise is intrinsically meaningless, then that makes our employment of it the most purely symbolic. It is rid of the iconicity or indexicality it might otherwise have.

    So my point seems to be that we have a process of refinement going on. And the different views on what language is can arise from focusing on either pole of its developmental trajectory. Both sides can feel right as there is evidence for opposing views depending on whether one focuses on the early iconic and imagistic stages, or the late symbolic and unthinkingly habitual stages.
  • Evolution of Logic
    It’s been 30 years since I was digging into that particular literature so I am sketchy on the details. And I chucked out all the papers long ago.

    This is an example of the kind of thing though - https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4206216/#!po=6.09756

    The point was not that great apes couldn’t master a first step of reasoning - the equivalent of a disjunctive syllogism where the ape could tell that if one food reward cup was empty, then the treat was hidden in the other. It was that once you started adding one such rule on top of another, performance fell off fast. It was too much working memory load to keep more than one contradiction in mind.
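
    The first step the apes could manage can be written down directly. This is my paraphrase of the cup task as sketched above, not the actual study protocol; the function and cup labels are invented for illustration. One elimination step is trivial, but note how quickly the bookkeeping grows once more cups and more rules must be held in mind at once.

    ```python
    def find_treat(cups, observed_empty):
        """Disjunctive syllogism: 'A or B; not A; therefore B'.
        Returns the cup that must hold the treat, or None if the
        eliminations so far leave the question open."""
        remaining = [c for c in cups if c not in observed_empty]
        return remaining[0] if len(remaining) == 1 else None

    # One step of reasoning: this is the level the apes could master.
    assert find_treat(["A", "B"], {"A"}) == "B"

    # Stack the problem up and a single elimination no longer settles it -
    # resolving the inference now needs further steps held in working memory.
    assert find_treat(["A", "B", "C", "D"], {"A"}) is None
    ```
    
    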
  • Evolution of Logic
    Is fundamental logic instinctual to organic cognition as a function for processing certain types of spontaneous causality?Enrique

    Experiments have been done to test apes for a capacity to learn simple logic rules. The evidence is they struggle to master more than a step or two of reasoning depth even with training.

    This is what we would expect if logic basically piggybacks on the human capacity for language. We have the neurology for syntactic structure - the recursive grammar trick. We can stack up the if/then steps in our working memories.

    Just speaking is proto-logical in forming our thoughts as grammatically structured causal tales of who did what to whom - the canonical subject-verb-object pattern that organises all language (if not necessarily in that order).

    And if speech acts can be true, then they can be false. Think of the social advantages that came with the invention of lying. A lot of the elements of logic as an explicit reasoning discipline are there once we have speech.

    But logic itself is then a culturally developed habit. Anthropological research showed that illiterate Uzbek herdsmen resisted categorising the world in ways that seem “obvious” to any educated modern person - like putting a set of tools such as an axe and hammer separate from the things the tools acted on, like nails and wood. In their experience, those things went together and there was no abstract distinction that made sense.

    Anyway, this has been a topic of a fair amount of research. Human brains are preadapted for logic due to the recursive or nested structure of grammar. And then actual logic has developed as a useful cultural habit of thought. It becomes embedded through the standard modern childhood. (With various degrees of success perhaps.)
  • Is space/vacuum a substance?
    Probability is not consistent with the three laws, when maintained as three, because identity of an object gives us determinateness.Metaphysician Undercover

    The conclusion I draw is that yes, we can't presume complete determinism. But nor do we then need to lapse into complete indeterminism.

    Pragmatism is the middle path of constructing a theory of logic in which indeterminism is what gets constrained.

    As an ontology, that says reality is foundationally indeterminate, and yet emergently determinate. And the determinate aspect is not merely something passively existent (as often is taken to be the case with emergence - ie: supervenient or epiphenomenal). It is an active regulatory power. The power of emergent habit. The power of formal and final cause to really shape indeterminate potential into an actualised reality.

    So it is a logical system large enough to speak of the world we find ourselves in - complete with its indeterminate potentials and determining constraints.

    Further, the author of your referred article, Robert Lane, explains how Peirce allows that the term of predication might be defined in a multitude of ways.Metaphysician Undercover

    Again, I am taking the systems view of ontological reality. So the internalist approach that Peirce takes on this would be the feature, not the bug. I'm still digesting that aspect of Lane's argument, but that was one of the sharp ideas that grabbed me.

    Notice how Robert Lane provides no indication, throughout that article, as to how Peirce shows any respect whatsoever to the law of identity in his discussion of the LNC and LEM.Metaphysician Undercover

    There is equivocation here on Peirce's part because his logic of vagueness was a project still in progress.

    His early work was couched in terms of Firstness - free fluctuations. But as we have discussed, a fluctuation already seems too concrete and individuated. Formal and final cause appear already to be playing a part by that point. A fluctuation has to be a fluctuation in something - or so it would seem.

    This is precisely the obvious hole in the vogue for accounts of the Big Bang as simply a rather large quantum fluctuation. Even if a quantum field is treated as the most abstract thing possible, the field seems to have to pre-date its fluctuation. Verbally at least, we remain trapped in the "prime mover" and "first efficient cause" maze you so enjoy.

    But he was recasting Firstness as Vagueness in later work. And we can see that in his making a triad of the potential, the actual and the general - as the mirror of the three stages of the laws of thought.

    A fluctuation is really a possibility. A spontaneous act, yet one that can be individuated in terms of the context it also reveals. We are nearly there in winding our way back to bootstrapping actuality.

    A step further is "potential" properly understood as a true vagueness. A fluctuation is a spontaneity that is not caused by "the past". It is called for by the finality of its own future - the world it starts to reveal. This is one of the things that smashes the conventional notion of time you prefer to employ.

    But anyway, when it comes to the law of identity, it is enough for everyday logic that reality is already reasonably well individuated - at least in the ways that might interest us enough to speak about it. The law of identity can work even if any instance of individuation is merely a case of uncertainty being sufficiently constrained.

    However when we get to ontological questions about the machinery of creation, then this background to the laws of thought becomes relevant. The details of how things really work can no longer be brushed under the carpet, or shoved in a black box labelled "God".
  • If Brain States are Mental States...
    I don't see the issue. You haven't refuted that brain state language is scientific. You seem to be saying why it's scientific. I'm just claiming it IS scientific.RogueAI

    It's a big deal to me as it was the central issue I was dealing with when I first ventured into mind science as a youth. :grin:

    Artificial intelligence was the first great disappointment. The guys were only talking about machines it turned out. And then brain imaging promised to be the new revolution. Consciousness would be put on the neuroscientific agenda at last as a concord had been agreed with philosophy of mind.

    We would all be starting off in humble fashion by merely identifying the neural correlates of consciousness (NCC) - a dualistic approach where the material explanation in terms of a physical state would be married to the reportable phenomenology produced by a mental state.

    But that great and expensive exercise produced remarkably little directly. It just brought home how muddled people were in their conventional Cartesian division of reality into physical and mental states. All that could result was a doubling down on the underlying dualistic incompatibility of descriptive languages.

    So science did have to question what was "scientific". And for a start, it wasn't speaking of the brain in terms of a machinery with physical states. It had to be some kind of embodied information process - but information processing is another domain of jargon founded on a mechanical "state-based" ontology. Neuroscience couldn't make progress by swapping out a biological mechanism and wheeling in a computational one. That still just left it chasing the phantom of a purely mechanical explanation.

    Long story short, you now have generic models of the "mind~brain" in terms of Bayesian Brain theory and the enactive turn within cognitive psychology, not to mention social constructionism being brought into play to account for the extra features of the human mind~brain system in particular.

    So the science here is a shifting beast.
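    The Bayesian Brain framing mentioned above treats perception as probabilistic inference: prior expectations get revised against sensory evidence. A toy sketch of a single such update (the hypotheses and numbers here are illustrative assumptions, not any specific model from the literature):

```python
# posterior ∝ likelihood × prior, normalised over the hypotheses.
def bayes_update(prior, likelihood):
    unnorm = {h: prior[h] * likelihood[h] for h in prior}
    total = sum(unnorm.values())
    return {h: p / total for h, p in unnorm.items()}

prior = {"face": 0.5, "not_face": 0.5}        # expectation before looking
likelihood = {"face": 0.8, "not_face": 0.2}   # how well each hypothesis explains the input

posterior = bayes_update(prior, likelihood)
print(posterior)
```

    The point of the framework is that the "mental state" is this running process of inference, not a static mechanical configuration.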

    Neuroscience was doing a god-awful job in the 1970s as it was basically a branch of medical science and so absolutely wedded to a mechanist ontology. To fix your schizophrenia, the best theory might be to kick your head hard enough that maybe it gets repaired - like thumping a TV (back when TVs had vacuum tubes and loose connections, so that could actually work).

    But does modern neuroscience still try to explain the mind - as we humans like to say we experience it - as "talk of neurotransmitters, synaptic gaps, certain chemicals, etc"?

    I would certainly question that. A big picture scientific account would use the jargon appropriate to the whole new level of theorising that has emerged over the past 20 years or so.

    If someone tells you they're in pain (a mental state word, obviously), they've communicated information to you. You know more now than you did before they talked to you. That's meaningful communication.RogueAI

    Exactly. This would be defining "mental vocabulary" in terms of what works in ordinary social and culturally appropriate settings. It is a way of co-ordinating and regulating "other minds" within a shared "mental space" of pragmatically social meaning.

    The problem lies with the extent to which this folk psychology - very useful in the business of existing as a social creature - gets reified as some kind of deep philosophical wisdom. I have "a mind". I can see you have "a mind". Maybe a cockroach has "a mind". Maybe the Cosmos too? Maybe "mind" is another substantial property of reality - a soul stuff - like Descartes suggested.

    So what contrasts with the scientific vocabulary? Is it a folk psychology vocabulary? A religious vocabulary? A mystical vocabulary? Where does all this mind talk come from?

    Good anthropological studies show just how culturally specific our own philosophically-embedded mind talk actually is. The Ancient Greeks played a large hand in inventing it as it had a pragmatic use - it gave birth to the cultural notion of a person as a rational individual who could thus play a full part in a rationally-organised society. It was a way of thinking with powerful results in terms of evolving human social structure. A seed was planted that really took off with the Enlightenment and Scientific Revolution (and which, with its flowering, engendered its own counter-revolution of Romanticism and Idealism).

    So mind talk also has its instructive history. It has its pragmatic uses and has continued to evolve to suit the expression of them.

    A greater compatibility between the two sources of language might be a good thing.

    But for me, the irony there is that mind science has to move away from the machine image and become more used to discussing the brain in properly organic terms. While folk psychology also needs to make its shift away from the "dualistic substance" shtick that mostly just ends up aping the errors of an overly-mechanical model of reality.

    Even to oppose the subjective to the objective means you have to buy into the existence of the objective (and vice versa).

    The distinction may be pragmatically useful. People seem to like that sharp separation between the world of machines and the realm of the mind. But the mind~brain question is about whether this distinction is real or merely just our pragmatic social model of the reality.

    Neuroscience has pressed on to deliver answers I am much more comfortable with these days. Dropping talk of "states" is part of that change. Or rather, always framing the word "states" in quotes to acknowledge the presumptions we have put into play just there.
  • If Brain States are Mental States...
    No, but every chemical state is identical to some physical state. But not the other way around: not every physical state is identical to some chemical state.Pfhorrest

    That's just restating supervenience as a claim. The claim only holds if "states" actually exist in the world rather than in the scientific imagination.

    The language of states - as part of the language of machines - is certainly a pragmatically useful way of looking at reality. If we frame the facts that way, we have an engineering blueprint we can deal with.

    But "states" is a pragmatic construct. And the reality we encounter often doesn't fit that construct so well. The map ain't the territory. And so claims of supervenience must be regarded as having a logical force only within a particular reality-modelling paradigm.
  • If Brain States are Mental States...
    2. Brain state vocabulary is scientific.
    ...
    6. Bob and Sheila can meaningfully communicate about mental states.
    RogueAI

    The flaw in the argument would be the suppressed premise about what kind of communication the second kind is.

    If brain state vocabulary is "scientific", it needs to be said what class of vocabulary is instead employed to talk about mental states. Is it merely "unscientific" (a vague contrary claim)? The argument needs to clarify in what way such communication could be meaningful.

    Scientific vocabulary is meaningful in its pragmatic application. If we talk about the world generally as a machine, and thus the brain as a specific kind of mechanism, then the pragmatic effect of this form of language is that - implicitly - we should be able to build this damn thing.

    We are viewing the conscious brain as an example of technology - natural technology - that we can thus hope to replicate once we put what it is and what it does into the appropriate engineering language.

    So "scientific" vocabulary isn't neutral. It has meaning in terms of what it allows us to build. It is all about learning to see reality as a machine (a closed system of material and efficient causes).

    Of course, science is a broad enough church that it doesn't have to reduce absolutely everything to mechanism. And the aim can be also to regulate flows in the world as a substitute for making a machine. Engineering covers that gamut.

    But you see the issue. Brain state language is itself a reflection of a particular reason for describing nature. It aims to extract a blueprint of a machine.

    Then where does mental state vocabulary fit in to the picture? In what sense is it meaningful to someone or some community of thinkers? What is the larger goal in play?

    To be commensurate, the two linguistic communities would have to share the same goal. And they are going to be talking at cross-purposes to the degree that they don't. And in both cases, they may be talking meaningfully (ie: pragmatically), but also, they are both just "talking". They are both modelling the noumenal from within their own systems of phenomenology.

    10. Therefore, (1) is false.RogueAI

    The conclusion can't be so definite as "mental state vocabulary" is too ill-defined here. What makes it meaningful?

    [Note that a social constructionist - as a scientist - would have plenty to say about how humans do use "mental state" language as a pragmatic means of regulating their (social) environment. We talk about our emotions all the time - love, jealousy, boredom, happiness. But are these "feelings" or "culturally meaningful rationalisations"? Even a phenomenologist would examine "feelings of love" and find a whole lot of unreferenced physiological responses that seem fairly aligned with a counter view of the brain and body as "a machine".]
  • Definitions
    The whole "meaning is use" shtick is not wrong, but clearly also not the final word on language as a semiotic phenomenon. We want a properly general theory that covers "everything" in a nicely totalising fashion.

    Such a general theory is that language is about placing limits or constraints on uncertainty. And precision - a "definitional" strength usage - boils down to a sign (a verbal construction) dividing the world with a logical force. The sign has to split the thing in question into what it is, in terms of what it is not. If the goal is precision, this is arrived at via the logical machinery of a dichotomy.

    So this gets at the pragmatics of what language does, why it has such extraordinary power, but also why it is at root a vague business.

    The meaning of any locution is a game. The words could be taken to mean "anything". But what they mean this time is how they function to divide the uncertain world into some binary Gestalt opposition of figure and ground, event and context.

    If there is pointing going on - and in some sense there always is - it is a pointing to some relatively defined thing, but a pointing that involves also pointing away from its "other", the holistic context needed to construct the thing as "that thing".

    So a locution is relative in its logical claim. It is only precise to the degree that it precisifies the "other" - the negative space, the context - that must also be "spoken of" in a definite fashion.

    The fact that language is often not used with that level of precision is a reason why meanings or definitions - the "right habits" of interpretation - feel unstably communicated.

    And there is then the deeper ontological point that the world itself is uncertain or probabilistic. It resists accurate definition because it is not some naive realist "state of affairs" or "set of concrete particulars". It actually is vague or unstable. And language - aiming at crisp bivalence - is simply cutting it to fit its Procrustean bed.

    Offering a definition, at least in part, is informing people of how a word is intended to be used.bert1

    My argument is that language - as a semiotic tool - has this natural goal. It wants to do the powerful thing of regulating nature. And power is maximised by binary precision - the logic of the dialectic. Being able to present a "precise definition" is thus a demonstration of one's mastery of language as just such a tool.

    But we should also remember that we can only point towards something if we are simultaneously seen to be pointing away from something - its dichotomous "other". That is the act that reduces the most uncertainty or entropy, creates the most meaning or information.
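    That claim about dichotomies has a direct information-theoretic reading: a distinction that splits the possibilities evenly removes the most uncertainty, in Shannon's sense. A minimal sketch (the probability values are illustrative):

```python
from math import log2

def entropy(ps):
    """Shannon entropy, in bits, of a discrete distribution."""
    return -sum(p * log2(p) for p in ps if p > 0)

# An even figure/ground dichotomy resolves the full one bit of
# uncertainty a binary distinction can carry; a lopsided one resolves less.
print(entropy([0.5, 0.5]))  # 1.0
print(entropy([0.9, 0.1]))  # ~0.47
```

    So "pointing towards" and "pointing away" in equal measure is, on this reading, the maximally informative act.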

    And we should remember that the world we speak about is not itself so crisply divided into a clutter of parts. We do violence if we cut across the actual holism of the world which - pansemiotically - can be regarded as itself a system of signs. A "conversation" nature is having with itself so as to impose a relatively bivalent state of organisation on its own fundamental uncertainty.

    The semiotic view of language is thus a truly general explanation of language as a phenomenon. It is how the Cosmos itself organises in principle.

    Anyway, summing up, language is a technology of reality stabilisation. For us humans, words allow us to co-ordinate our Umwelts - share our points of view.

    There is always irreducible uncertainty in every stab at creating such a state with words. And yet the dialectical logic - the way words can act as a binary switch - means it is possible to aim as high as we like in asserting some precise state of affairs. While also, the pragmatism of language - the fact that we are social beings operating at many levels of "world-making" - is where the "language games" shtick comes in. Our practical purposes may be quite low brow when shooting the shit with mates.

    Language gives us the means to aim as high as we like. But by dichotomistic definition, the same means can be used to be as vague and ambiguous as one wishes. On the surface, the two can pretend to be the same thing. :wink:
  • Definitions
    What's your problem in answering this question exactly?

    Does one say “not everything” to mean “almost nothing”? Or to mean “well, there are exceptions”?

    No need to get so huffy. Just tell us what your words mean. Point to the right answer. :grin:
  • Definitions
    Do you have a point?Banno

    Clearly. Will you pretend not to see it? Undoubtedly.
  • Definitions
    If...Banno

    Does one say “not everything” to mean “almost nothing”. Or to mean “well, there are exceptions”. A simple exercise in the logic of quantifiers one might have thought. Apparently not.