Comments

  • Are proper names countable?
    There is no ambiguity to this infinity. It's an endless series of moments of speech.TheWillowOfDarkness

    Studiously missing the point as usual.
  • Are proper names countable?
    It would however take an infinite time do indicate exactly which individual one was referring to.andrewk

    Like I said then.
  • Are proper names countable?
    That's not an issue of the speaker because they are the one taking the action.TheWillowOfDarkness

    In what sense would they have ever taken the action? The point here is that predication needs to name the name to make some definite claim. Monkey with infinity and you are dealing with ambiguity.
  • Are proper names countable?
    I missed out a few key words. I meant that individual names would have infinite length and so you would have to wait an infinite time to discover whether the reference is to Jim...............my or his brother, Jim............mi.
  • Are proper names countable?
    An infinite number would be names of infinite length and thus require an infinite time to be actually said. So there’s that problem.
  • Epistemic Failure
    It's not a failure if what you require - semiotically - is a machinery of infinite potential reference coupled to constraint of semantic indifference.

    So epistemology wants these two complementary things.

    It wants a syntax that can generate endless variety. An alphabet of 26 letters can be used to generate every possible word and sentence. Four DNA bases can generate every possible protein molecule.
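
    As a rough illustration of that generative openness, here is a minimal sketch of the combinatorics involved - nothing below is specific to any claim in this thread, it is just the standard counting argument for why a small alphabet yields effectively unlimited variety.

    ```python
    # How fast a small alphabet generates an effectively unlimited variety of strings.
    # Purely illustrative combinatorics: k symbols allow k**n distinct strings of length n.

    def variety(k, n):
        """Number of distinct length-n strings over an alphabet of k symbols."""
        return k ** n

    for n in (1, 5, 10, 20):
        print(f"26 letters, length {n:>2}: {variety(26, n):,} possible strings")

    # DNA works the same way: 4 bases read in triplets give 4**3 = 64 codons,
    # already more than enough to code for the 20 standard amino acids.
    print(f"4 bases, codon length 3: {variety(4, 3)} codons for 20 amino acids")
    ```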

    Then that unlimited openness gets coupled to the thing that epistemically closes it: a semantics - a purpose to be served, a reason to care, a distinction that is actually worth marking, a difference that in fact makes a difference.

    So any sentence or protein could be produced. For a natural system, that is a huge freedom. And that semiotic possibility in turn allows a system of limitation which decides some statements or molecules are noise, or junk, while others are signal, or have intentional value.

    Epistemic closure thus becomes a reasonable choice. Rather than worrying that the world is "some totality of facts", facts become distinctions or individuations that could materially matter. Facts aren't definite in themselves in some realist fashion. They are simply what is "true" - or worth us knowing - to the degree that we have some reason to care.

    Facts thus are always intrinsically self-interested. While also being "about the world".

    It is this double-headed nature that often confuses. Realism vs idealism tries to make facticity all a thing of the world, or all a thing of the mind. But semiotics shows that "facts" are the signs by which we relate to the world. Indeed, epistemically, it is the relation that creates the self and its world.

    But anyway, the way it works is that syntax gives you your referential openness and semantics gives you your referential boundedness. Together, they compose an epistemic system.
  • Ontology: Possession and Expression
    The idea I believe is that the modes express substance and that substance is nothing other than its expression in the modes.StreetlightX

    This points the conversation in the right direction - towards an immanent, emergent, process view - but it doesn't really deal with the ontological issue of how modes get expressed and individuation becomes a substantial fact.

    One has to continue on all the way to a contextual or constraints-based view of causality, where it is the limitation on possibility that is the story of individuation. The essence of something is defined by the information that limits the variety of its fluctuations. And so then any individual thing is a hylomorphic mix of that limited state and then the further fluctuations which haven't been suppressed because they don't matter, given the general purpose or form in play.

    So the Aristotelian four causes/hylomorphic view of substantial being or individuation was pretty much correct after all. His logical talk picks out a hierarchy of predication. It sounds like he is talking about substances with properties. And indeed, that logical atomism is a pretty handy model of reality if you just want the quick and dirty reductionist approach. It does the job when you are living in a Cosmos that is mostly large and cold, and being is in fact in its most highly developed or concrete state.

    But behind that post-substance model of being - objects with properties - is the pre-substantial or developmental story. And here it is a case of top-down constraints in interaction with bottom-up degrees of freedom.

    Instead of individuation being some kind of simple expression - a reversion to monistic thinking - it is the complex product of a historical or contextual repression. Possibilities get constrained in some globally general fashion. And then particulars exist as differences that don't make a difference - to the purpose or form of the thing.

    A horse could be white, brown or black. From a species point of view, these are unconstrained genetic possibilities. Melanocytes offer these basic degrees of freedom. They are differences that don't make a difference when it comes to being able to breed. And so a property of being horse comes to include this range of hide colour as a matter of accidental expression. But a horse - as defined by the constraints of a genetic lineage, the information of an evolutionary history - is strictly limited in its likelihood of expressing other identities, like possum, or tiger, or goat.

    So Aristotle is great because two ontological models can be found in his metaphysics. There is the simple reductionist model that is atomism, where being is already substantial and possesses inherent properties. This is the upward construction or composition based ontology that normal predicate logic talk picks out.

    Then there is the other holistic or systems view of ontology that is the four causes/hylomorphic model. Substance develops into crisply individuated being by the imposition of downward constraints that limit the possibilities for the accidental. And essentially that is an open-ended view of being, as a limitation of the accidental is not the elimination of the accidental. Indeed, this becomes an ontology where the accidental is also something that is ontically fundamental. As Peirce put it, the synechism or continuity of constraints is matched by the tychism or spontaneity of pure chance.

    A complete ontology of nature thus has two useful models. Reductionism and holism. We can understand reality in mereological terms as an atomistic collection of individuals, or instead as a complex developmental story of individuation.

    Again, the move from possession to expression is too simple a shift in emphasis. It only gets us to the first step of arguing for emergence. And that remains inherently mysterious as it does not explain why individuation should take some general mode.

    If the goal is to account for essence and accident in some fashion more sophisticated than subject and predicate, then you actually need to continue on to the full-blown holism of a constraints-based ontology.
  • Brain Food, Brain Fog
    And have you experienced diet-related “brain fog”?0 thru 9

    The article doesn't mention it but the link is now even stronger with the recent discoveries about the microbiome and gut-brain axis. In short, your diet needs to create a healthy gut bacterial ecosystem.

    There is symbiotic signalling between bacteria and gut. And the gut and brain also talk via the central nervous system. So beyond leaky gut and other mechanisms, this would be another big reason for brain fog. It would also be a reason to think twice about using antibiotics as well as changing your diet.
  • Stating the Truth
    But anyway, after a while the fun stopped . Its an addiction. The world got fuzzier and fuzzier and reduced to what I could make of it philosophically. To the point where major life events would be happening and I'd be only half-there, thinking about how I could analyze this and fit it into my philosophical preoccupations, or weaponize it argumentatively. Its not a good thingcsalisbury

    OK. I'm no therapist. But let's take this as the core complaint you are presenting with. And it is certainly a recognisable condition.

    Some of us are good at these kinds of argumentative skills. They come naturally. But they can put us at a distance from our own lives and societies. And maybe also we need a well defined subject matter to apply them to. Inquiry has to have some point so that it can move towards a definite end. All that energy ought to have a purpose so that it feels well connected to a goal of value.

    But before even worrying about that, an actual therapist would say check your mental health foundations. Maybe this rather intellectualised complaint - feeling troubled by a habit of being argumentative - will disappear as an issue if you first focus on getting the basics sorted.

    What does that mean? It starts with the body and physical condition. Strength training, good diet and sufficient sleep. If you are going to turn over a new leaf by constructing a better set of habits, then hit the gym, understand nutrition and don't compromise on shut-eye. These are all routines to build.

    There is a lot of new research to confirm this. Modern life is dreadful on these three scores. Your diet, for instance, affects your gut bacteria and that links straight to your brain and moods. Start feeling physically terrific and the other stuff starts to recede as anything to worry about.

    Then of course, after physical health comes the quality of your social relationships. Again, fixing these might require the building of new sets of habits. It might or might not be an issue for you. However again, I think it is true that you want to build from the ground up. If you are going to make a difference in terms of changing habits, fixing the basics could be 80 percent of the answer.

    After that would come what you do with your argumentative skills. Do you lock them away in the cupboard? Do you find them something useful to do?

    I agree. There is the problem of over-analysing life. It has its destructive possibilities. But also, for some of us it is what we are good at. Would we really want to give it away?

    I think the answers here would be highly individual. It would be more something for you to discover. Which is unlike the general therapeutic advice - the promise that you will get big and immediate returns from focusing on developing the routines for a healthy body and satisfying social engagement.

    Its a fucked up logic: I project onto other people the negative self-image I have of myself, I imagine them seeing me like that, so then I feel humiliated, and humiliate myself, then blame them for feeling the way I do.csalisbury

    Here this touches on how we understand the facts of the human condition. My argument is the familiar one from symbolic interactionism and positive psychology.

    You look to be talking about the mask we have to present to society so that we become predictable and interpretable beings within that society. We are actors in a running social drama. And so we must present the self that speaks to some intelligible role. Others read that mask and act according to its "truth". Both sides have to do the work that makes the mediating sign a correct framing of the self~other relation.

    Looking at it in this fashion should create a distanced third person view of what you are doing. You are describing the situation very personally. The mask you employ is a tactic to deal with an unease in social relations but then it traps you in the restricted space of actions it legitimates.

    Everyone struggles with this to some degree or other. My daughter sounds very similar in that she is super-empathetic and socially aware, which then rebounds on her because she judges herself the way everyone else in public ought to be judging her. In reality, most people barely even notice what you are doing, except to the degree it might trouble their ability to predict your impact on their personal sphere of concern.

    I think it is striking how much people don't actually notice the elaborate "you" that you mean to project to the world. And if you are in fact operating from a sophisticated self-image, this is a good reason for feeling no one really gets you.

    One example from my own voyage of discovery. When I was 14, I got it into my head not to wear my school jersey because my mum didn't want to splash out on buying the "cool" school blazer. I also thought the rough wool was too scratchy.

    So then the winter term comes on. Ice on the puddles. Cold wind at the bus stop. But I'm still not bending and wearing that jersey. It is not that I've said any of this to anyone, let alone my mother. Outwardly, I am just a hardy kid not feeling the cold. But I've constructed a silent act of rebellion with no graceful exit to it - even if any morning I could have simply pulled on the jersey.

    So I go the whole winter like that. I'm talking an Auckland winter where it's mild. A jersey would be enough. But still, I'm the only person in the entire school. Yet no one appears to notice the fact. Not even my group of friends. There is a comment or two - especially because on the worst days I have to actively demonstrate I'm not cold by standing out in the wind at the bus stop, not huddling by the wall with the rest. However it is a fact attracting no interest or concern. It is only when the annual class photo has to be taken, and someone has to go borrow a jersey for me so I don't spoil the symmetry of the picture, that it finally gets any kind of official attention among my peers. Along the lines of, "now you mention it, that was a bit odd."

    So it is an example of how most stuff washes over people. They are on the lookout for the easy to read signs with everyday meanings. I learnt that on the whole, you remain invisible when you think your weirdness is in plain sight. Most people have no need to analyse other people too closely. As a rule, nothing is ever as big a deal as you are going to think it would have to be.

    In all three scenarios tho I'm shutting off any form of actual emotion connection. I'm bracketting my emotional needs.csalisbury

    The three options would all have reasonably naturalistic explanations.

    1) Being weird and self-deprecating is to choose the social route of submitting. Abase yourself before the group so as to be accepted on that score. And that is just evolutionary biology. Social species use signals of submission to allow them to fit into hierarchical social structures. So while you might see the strategy as some horrible personal mistake, it is also pretty natural in its logic. There is less reason to beat yourself up about it on that score.

    2) Being mocking and cynical is to display a more dominating posture. Again, the natural dynamic of hierarchical organisation demands this polarisation of roles. One must gracefully dominate, the other gracefully submit, so that relations run smoothly.

    Again it is deeply natural behaviour - logical in a systems sense. It is only with self-conscious humans that we would note ourselves falling into those contrasting roles and so ask the question of which one we truly are.

    3) Withdrawal is also a natural response. It is fairly hardwired and so not some weird choice you made.

    So you are accusing yourself of shutting off from emotional connection as if that were letting your better self down. But I think it is just social reality. We are tied to interacting through a system of social masks. That is just the way the game has to work - semiotically. There has to be some face we present that makes us part of a predictable and interpretable social environment. And then that has to continue the good old games of dominance and submission on which social organisation depends.

    So the conflict is between a romantic cultural ideal of how we should be - honest, true and naked in our interactions with each other - versus the evolutionary reality that we are creatures formed within semiotic systems that demand a natural hierarchical organisation.

    We can kick against this evolutionary determinism. But don't expect to defeat it. It exists for good natural reason.

    Depression, for me, is tightly wrapped up with a broader shame issue.csalisbury

    This is what good therapy could tackle. No doubt your life story could explain why shame would be a central issue. There would have been talk in your past that framed things for you in this way. And it would take talk to get that out in the open, examine it rationally, and start to put in place the habits of counter-talk that you would employ to talk it back down whenever it arises.

    As long as we identify with anything as part of our "self", it is not going to change. If we can "other" it, then we can replace it as a set of framing habits.

    But again, this bit would be very particular to your personal story. And starting with exercise, diet, sleep and relationships is likely to be the most general answer to fixing depression.
  • Why Should People be Entitled to have Children?
    Having a child is imposing on someone.Andrew4Handel

    And so you jump straight into a justification ... based on the impact it would have on the unrestrained freedoms of others within the collective.

    So as I just argued, this is where any talk of rights does start. And as the conversation develops, we would expect some pragmatic balance between the individual and the collective to emerge. That is what it is all about.

    By your reasoning I should be able to kill people until you make and argument that convinces me not to.Andrew4Handel

    Show me how MY reasoning leads to that. :grin:
  • The snow is white on Mars
    It does reflect the logic of nature, which is different to the logic of machines.

    So nature - as quantum mechanics has confirmed in foundational fashion - is fundamentally spontaneous or indeterministic. It is not actually deterministic but simply highly constrained in its habits. Circumstances limit the freedoms.

    At least that is my metaphysical argument here. Nature is not a machine. Accident and randomness are part of its inherent reality.

    And ordinary language simply follows suit for the same reason - it works. You need the duality of downward acting constraints and then the local freedoms which provide the actions to be actually constrained.

    So the answer is complex because it both reflects nature at the ontological level. That is the logic of its self-organising physicalism (despite our mechanical descriptions of its laws). And also, the brain/mind itself operates with this same natural logic. The brain is not a computer, a machine, but a modelling system seeking to impose informational constraints on the world’s entropic degrees of freedom. The brain is trying to make the world predictable by minimising its capacity to surprise.

    So language evolved as another level of that regulatory game. It became a collective social medium via which we could impose a regulatory structure on our shared experience and thus minimise the chances of being surprised.

    Snow is white. Understood as a constraint on our expectations, we then feel surprise - even alarm - when we encounter a patch of yellow snow. Likewise, we would be surprised if it melted to a gas rather than drinkable water.
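
    To put a rough number on that surprise, here is a minimal sketch - the probabilities below are invented purely for illustration - using the standard information-theoretic measure of surprisal as the negative log of how strongly the constrained expectation held.

    ```python
    import math

    def surprisal_bits(p):
        """Surprisal in bits of an event we had assigned probability p."""
        return -math.log2(p)

    # Invented expectations about the colour of the next patch of snow.
    expectations = {"white": 0.98, "yellow": 0.02}

    for colour, p in expectations.items():
        print(f"{colour} snow: {surprisal_bits(p):.2f} bits of surprise")

    # The strongly expected case barely registers (about 0.03 bits),
    # while the yellow patch carries about 5.6 bits - enough to grab attention.
    ```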

    So ordinary language is set up with an organic or natural logic. The world is always going to be full of surprises. One can’t know everything, especially when nature itself is inherently indeterministic. A stressed beam is going to surely buckle, thin ice is definitely going to break. But exactly how and when is chaotically unpredictable. That is just the way the world is.

    Ordinary language builds that fact in. It doesn’t rely on an artificial exactitude. It only has to constrain a state of belief to the degree that something is expected to be more or less the case. Then being wrong is usefully informative, not some disaster. The model of the world can be tweaked by either adding further constraints, or removing existing ones, to improve future performance.

    Snow is white, except where the huskies pee. Snow is frozen water, on earth at least.

    So ambiguity exists in nature and everyday language is functional because it models nature with that ambiguity in mind.

    Logicism is then the application of a purely mechanical notion of causality to the world. It is the language you would speak if the world had the causality of a deterministic machine.

    Of course, a mechanised view of nature has been terrifically useful in recent human history. The idea of absolute constraint is a powerful technological vision to impose on the world. There is a reason why we want to treat it as the “true” metaphysics.

    But in the end, nature is not actually mechanical. That is just a Platonic vision folk have found useful to impose on its inherent ambiguities.
  • Why Should People be Entitled to have Children?
    It is not clear to me why people have a right to have children and where that right would come from and how it would be justified.Andrew4Handel

    The most general “right” ought to be the right not to be constrained except to the degree that it is necessary. So the default position is “why not?”. We wouldn’t seek to impose restrictions on individuals (or individuals upon themselves) until we can supply the good reasons.

    You simply start on the wrong foot in asking why the right to have kids should exist. The first moral or practical question is why would we think to want to remove the open possibility. The burden is on you to make that positive argument.
  • The snow is white on Mars
    But if we had snow of different origins here (and we probably have, but I have no idea of their kinds), the word would be "naturally ambiguous" (like sand) rather than just, er, "philosophically ambiguous"Mariner

    Ordinary language use is ambiguous and thrives on that fact. Formal language is the attempt to remove ambiguity so as to provide the kind of certainty and absoluteness demanded by logic.

    Ambiguity can never actually be removed. It can only be constrained to an arbitrary degree. Definitions always remain “open to interpretation”.

    But formalism does apply generic syntactic constraints or rules to the expression of ideas. Being as opposed as possible to ambiguity is the key one, given the aim is a form of language that speaks about the true, definite, certain and absolute.

    So unambiguous speech is both a game that can never be won and also the goal to which logicism aspires.

    This irritating fact has launched a bazillion forum threads.
  • Physics and Intentionality
    As I have said many times, 'the law of the excluded middle' didn't come into existence with h. sapiens.Wayfarer

    The LEM really only "exists" as part of a system of thought - the three laws of thought, indeed. And even within logic - as Peirce pointed out - the LEM fails to apply to absolute generality. Its "existence" is parasitic on the principle of identity, or the "reality" of individuated particulars.

    If you do share the view that individuation is always contextual - the big theme of Buddhist metaphysics? - then you would likely be keener to stress the socially constructed aspect of the LEM, not its Platonic reality.

    The reason is, that viewing reason as an outcome of biology reduces it to a function of survival - which is the only criterion that "makes sense" from a biologists perspective.Wayfarer

    Well, the fact that "reason" had its genesis in evolutionary functionality doesn't really make it any less of a wonder how it has continued to evolve through human culture.

    One view is that if mechanical reasoning goes to its own evolutionary limit, that will be expressed in the coming Singularity - the triumph of the age of machine intelligence. So when it comes to the rational elegance of the algorithm, be careful of what you wish for. :)

    Of course, my own biosemiotic approach offers the argument against that. Intelligence remains something more organismal. But anyway, I think you are too quick to dismiss biology as "mere machinery" and so that is why you are always looking for something more significant about life.

    What I mean is, the same proposition, idea, formula, or whatever, can be represented in different symbolic systems, and in different media - digital, analog or even semaphore. I can't see anything confused about that.Wayfarer

    Again, I think the problem is in calling it "the one thing" as if it were an individuated object. That is where the conflation lies.

    A proposition is just some arrangement of words - a sequence of scribbles on a page. And a sentence is marked as starting at the capital letter, stopping at the fullstop. So using an understanding of correct punctuation, we can point to "an item" as if it were an intellectual object.

    But that is just pointing at the sign, the marks, the syntax! And what you are interested in is the semantics, the interpretation, the understandings the marks are meant to anchor.

    Which is where I say that is all about the contextual constraint of uncertainty. This is the opposite of a concrete object way of thinking.

    The meaning of a proposition isn't IN the words being used. Our understanding of what is being proposed is produced by the way we restrict our thoughts in some effective and functional fashion. Seen in a certain contextual light, the collection of marks could seem to stand for some state of affairs.

    Uncertainty remains. But what is for sure is how many alternative or contradictory readings we have managed to exclude. Most of the semantic work is about information reduction - how much of the world and its infinite possibilities you can manage to ignore.

    So the written or spoken words are quite concrete and definite objects of the physical world. You can point to them, record them, play them back later, translate them into any other equivalent code.

    But that is just the syntax - the signs formed to anchor your habits of interpretation.

    Interpretance itself is the semantic part of the equation. And it does not exist in the way of a concrete object but as an active state of constraint on uncertainty. It is not an item to be counted one by one. Every sign could have any number of interpretations, depending on what point of view you bring.

    Words tend towards limited interpretations because that is how a common language works. We need to learn the same interpretative habits so we can be largely as "one mind" within our culture. But constraint is what produces individuation. And it is a living pragmatic thing.

    This leads to the information theoretic definition of information as "mutual information", or other measures where it is the number of bits discarded, or possibilities that are suppressed, which creates semantic weight.

    I know a cat is a cat not just because it is cat-like in some Platonic generic sense, but because I am also so sure it isn't anything else in the possible universe. The number of other possibilities I've excluded adds to the Bayesian conclusion that it can only be a cat.
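
    Here is a minimal toy sketch of that "knowing by exclusion" point. The candidate hypotheses, priors and likelihoods below are entirely made up for illustration; the only point is that a routine Bayesian update concentrates the posterior on "cat" precisely by suppressing the rival readings.

    ```python
    # Toy Bayesian update: recognising "cat" largely by excluding the alternatives.
    # All priors and likelihoods are invented for illustration only.

    prior = {"cat": 0.25, "small dog": 0.25, "fox": 0.25, "possum": 0.25}

    # How likely each candidate is to show the observed features
    # (purrs, retractable claws, sits on the windowsill).
    likelihood = {"cat": 0.9, "small dog": 0.05, "fox": 0.02, "possum": 0.01}

    unnormalised = {h: prior[h] * likelihood[h] for h in prior}
    total = sum(unnormalised.values())
    posterior = {h: p / total for h, p in unnormalised.items()}

    for h, p in sorted(posterior.items(), key=lambda kv: -kv[1]):
        print(f"{h:>9}: {p:.3f}")

    # Nearly all the probability mass lands on "cat" because the evidence
    # has effectively excluded every rival interpretation of the scene.
    ```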

    As usual, this is the way the brain actually functions. Attention gets focused on ideas by inhibiting every other competing possibility. And this is easy enough to demonstrate through experiment. Thinking about one thing makes the alternatives less accessible for a while.

    What I'm arguing is that while in each case the representation is physical, the capacity to understand and interpret the meaning of those signs can't be understood in physical terms. What is doing that, what has that capacity, is not itself physical.Wayfarer

    If science can see matter and information as two faces of the same physics, then why can't it understand even interpretation as a physical act?

    We know neurons are doing informational things when they fire. We know they are forming a living model of their world.

    Perhaps what we - at the general cultural level - lack is then a way to picture in our heads how informational modelling winds up "feeling like something".

    And yet the irony there is we are happy to picture little atoms bumping about and to think we actually understand "material being" when doing that. Any physicist will say, stop right there. We really have no idea why matter should "be like matter". Sure, we have the equations that work to produce a modelled understanding in terms of numbers that will show up on dials. But we are still stuck at the level of the phenomenal - the umwelt of the scientist.

    So my approach is based on accepting that we are only ever going to be modelling - whether talking about matter or mind. The aim becomes to have a coherent physicalist account that is large enough to incorporate both in formal manner.
  • Physics and Intentionality
    Yes, and inescapably so, because we have two orthogonal (non-overlapping) concepts.Dfpolis

    But what does orthogonality itself mean? They are two non-overlapping directions branching from some common origin.

    So that is the secret here. If we track back from both directions - the informational and the material - we arrive at their fundamental hinge point.

    This is what physics is doing in its fundamental Planck-scale way. It is showing the hinge point at which informational constraint and material uncertainty begin their division. We can measure information and entropy as two sides of the one coin.

    As it happens, biophysics is now doing the same thing for life and mind. The physics of the quasi-classical nanoscale - at least in the special circumstance of "a watery world of watery temperature" - shows the same convergence between information and entropy for the chemically dissipative processes that make life possible.

    So this is the unification trick. Finding the scale at which information and entropy are freely inter-convertible. That is what then grounds both their separateness and ability to connect.

    Talking about their orthogonality is one thing. But talking about their connection has been the missing piece of the scientific puzzle. That lack of a physicalist explanation has been the source of the mind/body dilemma.

    But intellect and will do far more than "constrain material dissipation or instability." They have the power to actualize intelligibility and to make one of a number of equally possible alternatives actual while reducing the others to impossibility.Dfpolis

    What is constraint except the actualising of some concrete possibility via the suppression of all other alternatives?

    So intellect and will are just names that you give to the basic principle of informational or semiotic constraint once it has become internalised as some conceptualised selfhood in a highly complex social and biological organism.

    This is just backwards. Thought is temporally and logically prior to its linguistic expression. If this were not so, we would never have the experience of knowing what we mean, but not finding the right words to express itDfpolis

    The old canard. Sure, fully articulated thoughts take time to form. First comes some vague inkling of wanting to begin to express the germ of an idea - a point of view. Then - like all motor acts - the full expression has to take concrete shape by being passed along a hierarchy of increasingly specified motor areas. The motor image has to become fleshed out in all its exact detail - the precise timings of every muscle twitch, the advance warning of how it will even feel as it happens.

    And then when thinking in the privacy of our own heads, we don't actually need to speak out loud. Much of the intellectual work is already done as soon as we have that pre-motor stage of development. The inner voice may mumble - and stopping to listen to it can be key in seeing that what we meant to say was either pretty right or probably a bit wrong. Try again. But also we can skip the overt verbalisation if we are skating along from one general readiness to launch into a sentence to the next. Enough of the work gets done flicking across the starting points.

    So you want to make this a case of either/or. Either thought leads to speech or speech leads to thought.

    The neurobiology of this is in fact always far more complicated and entwined. But at the general level I am addressing the issue, Homo sapiens is all about the evolution of a new grammatical semiotic habit.

    Animals think in a wordless fashion. Then "thought" is utterly transformed in humans by this new trick of narratisation.

    If we only thought in terms of existing language, we would never need to coin new words.Dfpolis

    Huh? The point about constraints is that they limit creative freedoms. But creative freedoms still fundamentally exist. The rules set up the game. Making up new rules or rule extensions can be part of that game.

    Semiotics - if you follow the Peircean model - is inherently an open story. It is all about recursion and thus hierarchical development. You can develop as much complexity or intricacy as the situation demands.

    And where does thought not expressed in marks or sounds fit into your theory? I have just shown its priority, but it finds no place in your model.Dfpolis

    In your dreams you have. :)

    Animals and neural nets can generalize by association. Forming associations is not abstracting. Generalization is a kind of unconscious inductionDfpolis

    Get it right. Generalisation is the induction from the particular to the general. For an associative network to achieve that, it has to develop a hierarchical structure.
  • Physics and Intentionality
    There is no reason a unified human person cannot act both intentionally and physically.Dfpolis

    But that is still a dualistic way of expressing it. The scientific question is how to actually model that functional unity ... which is based on some essential distinction between the informational and material aspects of being.

    For what it's worth, I say this has been answered in the life sciences by biosemiotics. Howard Pattee's epistemic cut and Stan Salthe's infodynamics are formal models of how information can constrain material dissipation or instability. We actually have physical theories about the mechanism which produces the functional unity.

    The example I gave in the OP was that of the transmission of a single item of information across different kinds of media - semaphore, morse code, and written text.Wayfarer

    But also, these are just different ways of spelling out some word. So the analysis has to wind up back at the question of how human speech functions as a constraint on conceptual uncertainty.

    Semiotics is about the interpretation of marks. So "information" in the widest sense is about both the interpretation and the marks together - the states of meaning that arise when anchored to some syntactical constraint. A definite physical mark - like a spoken word - is meant, by learnt habit, to constrain the open freedom of thought and experience to some particular state of interpretation.

    The Shannon thing is noting that this is what is going on and then boiling it down to discover the physical limit of syntactical constraint itself. So given that any semantics depends on material marks - meaningfulness couldn't exist except to the degree that possible interpretations are actually limited by something "solid" - Shannon asks what is the smallest possible definite physical mark. And the general answer is a bit.
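
    For what that "boiling down to a bit" amounts to in Shannon's own terms, here is a minimal sketch with hypothetical message probabilities: entropy in bits measures how much uncertainty a mark has to resolve, and the more the expected messages are already constrained, the less the mark itself needs to carry.

    ```python
    import math

    def entropy_bits(probs):
        """Shannon entropy in bits of a discrete distribution."""
        return -sum(p * math.log2(p) for p in probs if p > 0)

    # An unconstrained choice among 8 equally likely messages needs 3 bits per mark.
    print(entropy_bits([1 / 8] * 8))          # 3.0

    # Once habit or context makes one message overwhelmingly expected,
    # far less has to be carried by the physical mark itself.
    print(entropy_bits([0.9, 0.05, 0.05]))    # about 0.57 bits

    # The limiting case of a single yes/no distinction is the bit.
    print(entropy_bits([0.5, 0.5]))           # 1.0
    ```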

    That analysis thus zeroes in on the point at which information and matter can physically connect. It arrives at the level of the mediating sign - the bit that stands between the world and its interpretation.

    So it is confused to talk about a "single item of information" being transmitted in different mediums. If these are all just different physical ways of saying the same thing, then it comes back to different ways of "uttering that word". It is thus "uttering words" that is the issue in hand. So how do words stand for ideas? Or rather - rejecting this representationalism - how do words function as signs? As physical marks, that can be intentionally expressed, how do they constrain states of conception to make them just about "some single item"?

    This 'extra ingredient' is itself reason, which is not explained by science, but which science relies on. It is nowadays almost universally assumed that science understands the origin of reason in evolutionary terms but in my view, this trivialises reason by reducing it to biologyWayfarer

    Biology ain't trivial. It is amazing complexity.

    But anyway, reason is explained by the evolution of grammar. The habit of making statements with a causal organisation - a subject/verb/object structure - imposes logical constraint on the forming of states of conception.

    Animals can abstract or generalise. That is what brains are already evolved to do. See the patterns that connect the instances. But with language and its syntactical form - one that embeds a generic cause and effect story of who did what to whom - humans developed a new way to constrain and organise the brain's conceptual abilities. We could learn to construct rational narratives that fit the world into some modelled chain of unfolding events.

    So psychological science can explain the evolution of reason. Animals already generalise. Language constrains that holistic form of conception to a linear or mechanical narrative. Life gets squeezed into chains of words. Eventually that mechanical or reductionist narrative form became completely expressed as the new habits of maths and logic. Grammar was generalised or abstracted itself. A neat culmination of a powerful new informational trick.
  • Stating the Truth
    Why did you say that hedonism is an illusion, but then suggest that structuring life in such-and-such manner gives the "right general mix" (presumably for living an enjoyable life)?darthbarracuda

    You can't just expect a "life of pleasure". It is personal growth and social connectedness that is what most folk actually report as rewarding. So right there, that includes meeting personal challenges and making various social sacrifices - the kinds of things you regard as part of the intolerable burden of existence.

    Epicurus et al have made it clear that directing one's efforts at obtaining pleasure is counter-productive. The seed of the pessimistic evaluation is already in this. Happiness is a byproduct of a struggle. Paradoxically we are most happy when we are not thinking about how happy we are.darthbarracuda

    Pfft. Hedonism is wrong minded as I said. You bloody well ought to be disappointed if you aim at it.

    As for byproducts and paradoxes, this is all still just your choice of framing - your resistance to the notion that reality might be in fact complex and not simple.

    Excitement isn't a fearful state of mind. The fight-or-flight response can only work if higher-level thinking is temporarily put on hold. You are not thinking about philosophy when running from a bear. It is fear that fuels the escape.darthbarracuda

    Check out the neurobiology of the sympathetic nervous system some time. Arousal is arousal. Why do you think people pay so much to ride roller coasters or bungee off bridges?

    And try giving a public lecture or doing a TV interview. Or playing a sports match in front of a crowd. You need to be shitting yourself with adrenaline to give a top performance - intellectually as well.

    The research of course shows a U curve of arousal. There is a case of too much as well as too little. But peak performance requires excitement/fear. Step on to the stage and your heart ought to be pounding as if you were running from that bear.

    People fear stupid stuff all the time - for example, I have a fear of miller moths. They are harmless creatures and I rationally understand this, but I nevertheless have an intense fear of them.darthbarracuda

    Have you ever tried to unlearn the reaction? Do you believe people simply can't?

    An organism with lethal thoughts is in a critical condition that jeopardizes its own survival. Fear sweeps in and suffocates the mind (ssshhhh), coaxing it into submission and back into the perimeter of "safe thoughts" where the organism is no longer a threat to itself. The mind is not the master here.darthbarracuda

    The fight/flight reflex is certainly usefully complex. It even includes a freeze mode. Just stopping paralysed can sometimes work as a last resort when an animal risks attack. So the circuitry to switch between modes of response exists.

    But why are we discussing the wild extremes of life threatening moments? How much do they have to do with the everyday routine? Why can't you frame your arguments in the neurobiology of the normal? What is wrong with taking the typical rather than the atypical as the ground of the discussion?

    You are pathologising your philosophy in short. You ought to examine why you have established that as your constant habit.

    This idea of the mind being the way the body enslaves itself features prominently in the work of Metzinger (meh), the horror of Lovecraft and Ligotti and the philosophy of Zapffedarthbarracuda

    Gawd, it must be true then. :roll:
  • Stating the Truth
    So the pendulum swings between painful discomfort to boredom discomfort.darthbarracuda

    Always one to look on the bright side, hey? :grin:

    I do a lot of strenuous and challenging things. If they actually hurt, I tend to stop. Likewise I enjoy the contrast of doing bugger all for extended periods. If that starts to feel uncomfortable, I tend to stop and find something challenging and strenuous.

    So my pendulum swings, as much as I can manage it, away from what I am ceasing to enjoy. Then because I accept that life has to be lived - hedonism is an illusion - the focus would be on structuring my life so that it gives me the right general mix of the two on a habitual basis.

    Fear/anxiety/panic literally suffocates the mind and prevents it from thinking. This is helpful to an organism's survival, such as during fight-or-flight situations where thinking is only going to slow the organism down.darthbarracuda

    Such rubbish neuroscience. What kind of thinking - rationalisation - do animals do? What is the difference between anxiety and excitement exactly? What is the point of confusing the confusion of the unprepared with the clarity of acting on well-developed habit?

    We are quite literally not allowed to think beyond a certain perimeter without anxiety immediately slamming us down and choking the thoughts out of us.darthbarracuda

    The brain is just so much more complicated and well-adapted than that. The response to moments of stress is not automatically a generalised panic attack. You are talking about what might be the eventual result of prolonged stress, not a normal healthy neurobiology as it was designed to function.

    I agree that a balanced lifestyle is recommended. But this also means a balance in terms of thinking. Too much thinking, too much seeing, will either kill or cripple you.darthbarracuda

    Yes. I was advocating a balance when it came to thinking. I think compartmentalisation in that regard - often seen as an unhealthy trait - is a useful trick to learn.

    Start by stopping those negative thoughts. What good does Pessimism actually do you except as a comforting rationalisation for remaining in "a near-perpetual state of controlled anxiety"?
  • Stating the Truth
    I'm just fucking sad man, I'm unhappy, I'm lonely.csalisbury

    I hear that and I'm sorry for it. And I don't expect to cure that with words here. The only insight I am offering is that the relief of that state has to recognise the complex situation we have collectively got ourselves into. I resolve it by compartmentalising life and not expecting to find myself in some kind of unified perfection.

    If your thing makes you happy all the more power.csalisbury

    I would question "happy" as a useful goal. Challenge, thrill, intensity, seem more at the heart of it. But all that against a backdrop of rest and control. We seek discomfort because we are too comfortable and comfort because we are too uncomfortable. As always, I would talk about what we can dynamically balance in practice. The natural goal of the mind is not to arrive at some fixed state but to maintain a state of adaptation in regards to the world.

    Again, that is picking up on the argument that the self is what emerges as contrast to "the world".

    But I have the suspicion that what makes me unhappy is this drive to harmony, even if its a weird syncopated harmony in disharmony. Im bored and tired of my thoughts. I'm especially bored of dialectics. Have you seen 'get out'? I feel like im half-anaesthetized in the 'sunken place' with some weird dialectical sidekick who argues on my behalf, while i lay unconscious and hurt. Sad & mad.csalisbury

    Again, no words will just fix you if they are just more rationalisation. But my view is that the psychology of this is that we are formed by our habits. And habits can be changed just as they can be learned.

    It sounds like you have clinical-strength depression. So as an established habit, this would be a neurobiological depth issue. And the conventional advice would be to start addressing the structure of your life to which it would be a state of adaptation. Positive psychology and other therapies can give you the tools for examining "the world" as you have imagined it, and to which your state would be an "adaptive" response.

    So again, I couldn't possibly diagnose you from a few posts. But the primary symptom we are discussing here is the habit of rationalising - imposing dialectical structure on "the world". There is this other self within you that isn't shutting off when you find all its efforts pretty meaningless.

    So what do you do? Do you stand back from that trained and educated aspect of your own personal history and label it as "not me", just despising it as a wizened siamese twin? Or do you give it something to do, get it involved in some activities that seem useful and productive in a long-run fashion?

    Maybe you should unlearn the habit? Maybe you should find it useful employment? These do seem the two contrasting ways to go. And both would seem valid.

    What do you really think your situation is? That you can't be fixed or that you resist being fixed? Once you take on the identity of "the broken" then of course you don't actually desire the change that would be a change to that state of habit. And to the degree that you view a life to be perfectible - just happy in some untroubled and thoughtless fashion - you are going to argue that the goal is impossible anyway.

    Habits are learnt by the accumulation of many tiny barely noticed steps. Habits can only be changed by the same thing. So a question is: do you know from experience the skill of changing a habit? Is that where you could use help and techniques?

    Then the other question is what is the best we can expect? I think feeling adapted - properly embedded in a context, but also with sufficient creative freedoms - does it for most people for natural reasons. I think it helped me that I did compartmentalise my selves to a fair extent into their physical, social and rational modes. I pay enough attention to keep all three plates spinning.

    As I am arguing, they can't be "well-integrated" because they are three spheres of being. They each need to be lived by their own lights to a reasonable extent.

    But if the OP is about the particular symptom of an over-powering habit of rationalisation, which seems mired in meaningless rumination, then you do stand at a crossroads from which you need to shift. Either unlearn the habit as it exists for you, or give it something meaningful to do. Those seem the obvious answers.

    So how would your day-to-day be different doing either of those things? What have other people actually done? That seems a useful conversation to have.
  • Stating the Truth
    Can you actually go any deeper, or is what seems to be a deeper layer actually just another illusion?Sapientia

    It is going to be appearances all the way down. But why talk of it as being just a series of illusions? I find it more accurate to see it as also a hierarchical series of selves.

    So there is the everyday biological "me" that sees the colours. I see the same shade of grey because it is useful to make automatic visual compensations that "make sense" of the image as if it were a real set of surfaces of an object placed out in the sun and crossed by shadows. I am at that moment the kind of self who is seeking to understand the world in terms of an intelligible collection of physical objects. So I want to "see through" all accidental features of the occasion - facts about where the sun is and how the shade falls - so as to get right through to the most meaningful state of interpretance, the one where I am acting self-interestedly in a world composed of physical objects.

    But then there can be other "me's" layered on top, adding further "world making". Language and culture produce the social me that reads the environment in terms of all its rules and customs. I relate to that structure - and in relating, become that type of self, that kind of point of view. I see that it is true that I am driving on the correct side of the road because there is a dotted white line to my right. I see I have done something wrong because someone is scowling at me. These are all social facts that are "true for me", and in being so, are constructing the "me" that would hold them as truths.

    Then we can kick it up another level to the scientific me and the truths at that level of being. Again, the facts of a scientific viewpoint are merely a further configuration of appearances. They boil down to numbers on a dial. The colour picker informs me that the pixel at a set of coordinates is RGB(126,126,126). And my scientifically-minded self accepts the objective truth of that.

    So as I replied to @Banno, truth always boils down to a point of view. There is some "us" that informs the relation with the known world. It has its needs and reasons. And then it forms an idea of the shape that facts will take. What it experiences is some "appearance" - or rather less dismissively, an Umwelt.

    An Umwelt is more than mere appearance as it is in fact an image of the world with us in it. There is no dualistic "us" that pre-exists its perceptions or truths. It is in coming to this state of interpretance, this particular habit of sense-making, that also forms the "us" that is the anchor for some definite point of view in regard to "a world".

    This is the properly deflationary route to a theory of truth. Pragmatism results in Umwelts. We emerge as habits of sense making - which is a positive thing as that constructs an "us" that is acting on the world in some concrete fashion. It is not the negative thing of an endless hall of mirrors, a series of levels of illusion with no ultimate "truth of the world".

    Sam Beckett has a quote about a progressively constricting spiral ... Schelling (or maybe Zizek) uses the metaphor of some kind of trap or knot that gets tighter the more you struggle against it.

    It feels more like a very tense and nervous imperative to organize thought into some arrangement of leakproof compartments.
    csalisbury

    I get this. But as I am arguing, I understand the situation to be that we humans are now complex creatures composed of multiple levels of selfhood. We have as a minimum the three levels of being biological creatures with animal needs, social creatures dependent on a co-operative social structure, and rational creatures with a recently-developed interest in living a mechanistic, quantified, technological and mathematically-encoded lifestyle.

    So there are levels of self produced by each of these levels of world-making. And the fit can be a little rough, especially given the accelerating pace of development. Thus we have to work a bit to create any sensible kind of balance when we are all projects in rapid progress, maybe never to be finished.

    Is this where your meta-model of philosophy goes wrong, or goes right?

    My argument is that how we see the world makes us the person who we are. And I grant that I have to be three kinds of person, in effect. So I would argue against your demands that metaphysics, in particular, should be so totalising as to include the kind of selves that are "feeling" or "poetic". Those kinds of selves are more about the cultural and animal creatures that we are. And even then, my complaint is Romanticism over-entangled the cultural and the animal. There is an advantage in being able to compartmentalise a lived life so that we can express our animal, social and rational selves in a more separated fashion.

    The levels of selfhood that need to be constructed to be a complete modern person do have overlap. They do wash into each other. But also, paying attention to keeping them separate, defining their spheres of influence and their appropriate times of expression, can help create a balancing structure.

    The totalising mistake would be to expect some kind of perfect integration of the psyche - the kind that would express itself fully in philosophy by elevating the affective and the poetic to the sphere of the rational. Or as is more the case, attempt to pull the rational back down to "their level".

    It seems healthier to me to be able to compartmentalise to a degree. Balance is being able to switch between broad modes of self - animal, social and rational by turns, depending on the setting. The difficulties would arise when we try to identify as just the one self - the beast, the poet, the thinker - as if we ought to be so centred and simple.

    It doesn't have to be easy or perfect. But it is our reality as modern humans. We have unleashed the scientific and technological forces that are constructing a new level of human selfhood and the world that self sees. And that does create a lived polarity, a structural conflict, between the subjective self (that sits nearest the animal pole) and the objective self (that is way over at the end of rationality).

    But do we have to feel torn if we can construct the further super-self that sees that this is the game and the combination of selfhoods/world-makings that needs to be balanced within our psychologies?

    The first step would have to be accepting that levels of selfhood is not a bad thing. It is not a failure to be hierarchically organised or stratified in this fashion. We can escape the strangling grip of rationality by making it one of the three things we can do well, in the appropriate context.

    The mistake, in my view, is trying to identify yourself with just one of the three levels on which it has now become natural to live as a modern human. I like the idea of being able to exist as all three kinds of selves in a fairly full sense, while not getting too hung up on achieving a (rational, mechanical) degree of perfection on that score.
  • Stating the Truth
    The question is rhetorical, rhetorical because I do not think it answerable unless "conscious and unconscious phenomena" is restated in some way that makes more sense.tim wood

    Well OK. So here for example I would note that neurocognitive researchers don't actually talk much about conscious and unconscious. They talk about attentional and habitual, or voluntary and automatic.

    They don't find a mentalistic jargon useful. They employ concepts that can be cashed out in terms of neurocognitive mechanisms, or behavioural criteria. So psychology - to the degree it is scientific - does restate "conscious and unconscious phenomena" in ways that make more sense to scientific inquiry.

    I realize there are exceptions, and maybe I'm a half-century out of date, but that's why I asked.tim wood

    The psychology of the 1970s was indeed pretty dismal. Behaviourism had little to offer. Cognitive psychology was too wedded to computationalism. Neuroscience classes were run by the medical school and had little to say about functional architecture.

    But a lot has changed. Evolutionary and social psychology have become big. So too, functional anatomy. Psychology has been put on a decent biological and developmental footing.

    Maybe you are talking about psychology as therapy or something?
  • Stating the Truth
    I'm none the wiser about what you want to say here. My comments are based on pretty basic psychophysics and neurocognitive research. I would presume those would be the parts that work.
  • Did Descartes Do What We Think?
    I have not addressed it, but there are two kinds of knowledge we have been talking about.Dfpolis

    I'm still finding it very unclear what it is that you think you are arguing. But maybe it is this. Maybe you are making the contrast between the roles played by coherence and correspondence in theories of truth.

    So on the one hand, there is the certainty (and doubt) that results from some generalised state of coherent belief. We have a world view that seems to work in reliable fashion. We have a pragmatic set of interpretive habits that do a good enough job of understanding the world. This is what intelligibility feels like. The world is experienced as having a stable rational structure - where dogs are dogs, horses are horses, the house on the corner is still blue like the last time we saw it, and we aren't concerned about the possibility it may have been repainted or knocked down in the last few days.

    Then there is the converse thing of the particular correspondence of a belief to a state of affairs. We are talking now about some individuated fact, which could thus be true or false as a particular thing. Our general knowledge of the world can't tell us that directly. From general knowledge, that giant dog could be a tiny horse. It is a possible fact consistent with a general view. So now we have to go a step further and establish that fact as being one way or the other as a matter of "immediate actual intelligibility".

    So when it comes to Descartes, he does seem to be claiming that every fact is merely a particular, and so suffers the challenge of correspondence. But he relies on an evil demon to pursue that line. And that increasingly becomes incoherent with that other aspect of our knowing - the one that relies on a generalised coherence.

    It is not such a stretch to argue that our perceptions could be dreams or hallucinations. You don't even need an evil demon for that to be true (according to the allowable possibilities of generalised coherence) some of the time. But for an evil demon to be universally the case - to the degree it can intrude on our thoughts and make us miscount the number of sides to a square every time we seek to establish that fact as a matter of perceptual correspondence - is a real stretch. It conflicts rather too violently with the rationality we find in knowledge as generalised coherence.

    And in the end, an evil demon that could so completely deceive us on that level - in a totally generalised way - falls out of the picture. It becomes a difference that makes no difference. Life for us would remain the same despite it being "a grand illusion". The epistemology of generalised coherence would absorb Descartes's evil demon. As you say, Descartes is still left in his chamber, stuck in that reality. Doubt so complete leaves him back where he started.

    But when it comes to the history of ideas, it remains important to see beyond the naive realism of the kind of "unity" of mind and world you appeared to be pushing.

    Descartes and Kant stressed the problem of knowledge correspondence. In psychological terms, the mind only appears to represent the world. The world is merely an image. And that creates a troubling epistemic gap.

    But then Peirce and Pragmatism stressed the generalised coherence of belief. So now we have a triadic or hierarchical epistemology with a long-run temporal structure. As I argued, globalised coherence creates a general certainty about what even counts as actually possible or actually likely. Perception begins with a state of reasonable expectation. And then correspondence fits in as the particular facts that might then come into question.

    Is that house still blue? Well, let's go take another look. We would be surprised if it were not. Although it is quite possible it might have been repainted. Less likely we were simply mistaken in our memory.

    Within a framework of generalised belief, we can then entertain a doubt about any particular fact. But the degree of that doubt is then always pragmatically constrained. We kind of know what needs better checking and what is unlikely to be wrong.

    So knowledge of the world has this intelligible structure - generalised belief that occasions particularised doubting. A Bayesian brain, in other words.
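
    To put that "Bayesian brain" remark in concrete terms, here is a minimal sketch in Python. Everything in it - the function name and all the probabilities - is hypothetical, chosen only to illustrate how a strong generalised prior constrains the doubt that attaches to any one particular fact, such as whether that blue house has been repainted.

        # Minimal illustration of generalised belief constraining particular doubt.
        # All probabilities here are made-up, purely for illustration.

        def posterior_still_blue(prior_blue, p_looks_blue_if_blue, p_looks_blue_if_not):
            """Belief that the house is still blue after a glance that shows blue."""
            p_looks_blue = (prior_blue * p_looks_blue_if_blue
                            + (1 - prior_blue) * p_looks_blue_if_not)
            return prior_blue * p_looks_blue_if_blue / p_looks_blue

        # Generalised coherence: houses very rarely change colour in a few days.
        prior = 0.99
        # Particularised check: one glance, reliable but not infallible.
        print(posterior_still_blue(prior, 0.95, 0.05))  # ~0.9995

    The residual doubt is real but pragmatically negligible, which is the sense in which particular doubts are always constrained by the framework of generalised belief.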
  • Stating the Truth
    You think the whole of psychology is a failed science somehow? A bit sweeping.
  • Stating the Truth
    The colored in world we see around us isn't how the world is, it's how it looks for conscious creatures with visual systems like ours.Marchesk

    Correct. So what better example of a psychological truth could be imagined?
  • Did Descartes Do What We Think?
    Awareness is not judgement. Being aware of both "this" and "horse" is not judging <this is a horse>, but we do so judge, because we confuse the joint awareness of the two different contents with that of a single object that is both this and a horse. For <this is a horse> means that the identical thing that evokes <this> is evoking <horse>.Dfpolis

    If we think we see a horse and not a dog, that is what we see. So while acts of perception do involve a general categorising conception and some particular sensory image, the two are normally experienced as a single act of interpretation. A point of view is what clicks into place.

    So yes, we can dissociate the ideas from the impressions as a further effort of analysis. But the “immediacy of intelligibility” is a result of the perception being a fusion of the bottom-up sensory possibilities and the top-down conceptual constraints. Awareness is the emergent synthesis where the particular impression now stands as an acceptable instance of a general idea.

    Psychology is about relational models. And that requires taking points of view. The sense of self is thus what emerges along with a sense of the world. The intelligibility being imposed on the world is the one that has “me” seeing “it” from some particular perspective.

    This is easy to see with illusions like the Necker cube. The front can become the back, depending on “where” we are placing ourselves in our sense of visual space. Are we a bit above looking down, or a bit below looking up?

    The stimulus remains exactly the same. But we can flip between two general conceptions as our intelligible interpretation of what it is in relation to where we are.

    So knowledge - even what feels like direct experience of the available sense data - is always a mediated judgement. The self that is perceiving and taking some viewpoint is part of what is getting constructed, along with the world understood as a realm of intelligible objects.
  • Stating the Truth
    What makes "The snow is white" true is the snow being white. That's not a justification.Banno

    As usual, the missing words are being white “to us”. Truths are always ultimately psychological facts, not ontic ones, as they require that reality has the further thing of a point of view.
  • Stating the Truth
    You're making all the world a falsifiable chess game.csalisbury

    So pronouncing the truth becomes something like just asserting identity?Baden

    My approach is semiotic. So as Baden notes, I wouldn't be defending naive realism. The self would be "revealed", as much as its world, by the process of inquiry.

    Of course you could then see the self at the centre of my own inquiry as some kind of ideally rational self ... that you don't like ... for reasons of your own. Or of your "own". Ie: whatever ideal self you have in mind as articulating the proper worldview from "your" point of view.

    Be that as it may, my response was simply that the right way to go about things is to "pronounce truth" - as that is then inviting falsification head on. It is saying, come have a go.

    Problems only arise if claims are made in ways that are vague or otherwise unfalsifiable. So I am saying the philosophical inquiry has to take the form of a falsifiable chess game. Pronouncing truth is not in itself some kind of psychological flaw.

    But Baden is right that people have to be wary about the degree to which they are also forming "a self" in coming to some clearly articulated world view.

    (But isn't that a good thing - to actually also become some sort of definite self in life?)
  • Stating the Truth
    So what's going on here? What is happening? Why can't we stop?csalisbury

    Could it be - done right - that it is following the principle that ideas must be stated definitely enough so that they could be found wrong?

    The worst thing of all is a mumbling, opaque, vagueness - an assertion which couldn't even be wrong as it does not put forward a clear enough claim. But if a claim is bold, then it makes itself open to the most direct counter-attack. Which is what you want if the aim of the game is intelligible discourse.
  • Is there anything concrete all science has in common?
    What I am thinking is that science might be just a very diverse range of practises with no underlying metaphysical claim to be found or to unite it.Andrew4Handel

    Why not begin by listing all the things science doesn't do then - like reading goat entrails, or accepting personal proclamations of faith, or wasting too much time on untestable speculation?

    Do you think that a story about metaphysical naturalism and epistemological empiricism won't emerge, exactly as @aporiap says?

    And maybe the concrete uniting feature is that science is generally pragmatic - not so fussed about prescriptive methods as you seem to think it ought to be. There are recognised good habits - well explored in philosophy of science - but also plenty of room for science as a creative art. It is not paradoxical that a method so powerful can also afford to be quite relaxed in many respects. It works so well that it can be sloppy in some regards.

    That is why philosophy of science talks of paradigms and the difficulty shifting them. Science often tends to assimilate evidence to existing wisdom rather than being built up of discoveries, paper by paper.

    So it is clear to anyone that science is generally committed to metaphysical naturalism, and then has a particular affection for atomism within that. It is also generally united by a method of reasoning that involves the cycle of abductive hypothesis, deductive theorising, and inductive confirmation. To deny this, as you do, is just unreasonable.
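
    As a rough caricature of that cycle - my own sketch, with placeholder names, not anything Peirce or philosophy of science prescribes - it can be put as a loop:

        # A rough, hypothetical sketch of the abduction-deduction-induction cycle.
        # The callables passed in are placeholders, not any real scientific method.

        def inquiry(observations, abduce, deduce, test):
            """Guess a hypothesis, derive its consequences, confront them with the world."""
            while True:
                hypothesis = abduce(observations)          # abduction: a plausible guess
                predictions = deduce(hypothesis)           # deduction: what should follow
                outcomes = [test(p) for p in predictions]  # induction: check against the world
                if all(outcomes):
                    return hypothesis                      # a settled habit of belief, for now
                # surprises feed back into the record, and a fresh guess gets abduced
                observations = observations + list(zip(predictions, outcomes))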

    I think it is too restrictive to try and reduce it all to physics or the physical or empiricism and neglect the role of the imagination, cognition,chance, invention, intuition, desire, bias, political forces, commercialism and so on.Andrew4Handel

    But science is also pragmatic and so doesn't believe that it needs to stick to some rigid approach. It is pretty flexible within the general limits that define it. There is room enough inside for quite a variety of ways of attempting to achieve progress.
  • Did Descartes Do What We Think?
    I take it from this that you have not read De Anima iii.Dfpolis

    As if there were one reading of it. :grin:

    You know that there are many contrasting readings on what was meant by the intellect and how it was embodied.

    Descartes published his Meditations on First Philosophy in 1641 and died in 1650. He was part of the background out of which the Enlightenment developed.Dfpolis

    Oh please. As if Galileo or Francis Bacon did not yet exist.

    And I take it from this that you have not read what i wrote in my last post.Dfpolis

    Is this going to be your standard response? Anyone who dares to disagree with you must simply be a failed scholar?

    You do realize that thoughts are not the same kind of signs as natural and artificial languages?Dfpolis

    But are thoughts things or processes? Are they the syntactical symbols, the mere marks, or the semantic acts of interpretation?

    So yes, semiotics is about recognising there is a difference between the signs - as marks - and their interpretance. If you want to call the semantic part of the story "thoughts", that would be reasonable.

    Or maybe you also want to say that thoughts can take mental images as their signs. And that would also be reasonable to me in the light of my position being that all sense data are signs in a syntactic sense. At some level, we find neurons firing in a fashion that physically marks out the state of a logically organised circuit. Some yes/no question is being answered about the "state of the world".

    Having read De Anima a number of times, i fail to see any evidence of this. Would you care to back this up with specific texts that support your point?Dfpolis

    I made my argument. There is a quite adequate naturalistic explanation of "human intellect". And it is no surprise that Aristotle is remembered as the empirical antidote to Plato's rationalism - a proto-pragmatist - even if that is of course a rough caricature of the story. You can choose to ignore it if you can't muster any telling counterargument.
  • Did Descartes Do What We Think?
    So, I have chosen to define "knowing" to refer to the process Aristotle described in De Anima iii -- a usage with a long tradition of philosophical usage. To wit, to know is to actualize present intelligibility. It is thus an activity of intellect -- of our capacity for awareness of information.Dfpolis

    If that is how you conceive of knowledge, it does not exist. Our actual system of episteme and doxa is always limited -- always open to shocking surprise. Our ability to predict, while real, is limited and uncertain. Failing to see this is a very dangerous form of hubris.Dfpolis

    So is the real debate about the accuracy of Aristotle's epistemology or the unreasonableness of Descartes's?

    I think Aristotle's approach - shorn of scholastic/divine interpretations - boils down quite nicely to a pragmatic and semiotic story. And I certainly don't support Descartes's mentalistic dualism. I just see that he has a place in history as a particular reaction to the simplistic empiricism that characterised the dawning Enlightenment.

    It is not me that would drive a wedge between sense data and rational argument, suggesting that knowledge rests on either the one or the other. My view is that theories and acts of measurement go together as an active and productive habit - an established system of sign. So the kind of doubt about sense data, and even habits of conception, that Descartes was saying were ultimately doubtable, well, in practice, we have no good reason to doubt them. At least until they start to make enough bad predictions.

    A Peircean epistemology stresses that the very nature of habits of conception is that they are "well developed" - the best we can do so far. So Descartes in his chamber could "conceivably" be a Boltzmann brain, making all his practical knowledge some random illusion. But then pragmaticism is about accepting that absolute knowledge is never going to be the case, and then moving on. The focus of pragmatism is on what constitutes "well developed" habits of belief or intellect. How closely can we approach some ideal of "absolute knowledge", or "objective totality", or whatever general epistemic goal we have a reasonable freedom to set ourselves?

    So Aristotle offered a fairly systematic and complex view of epistemology. So does Peirce. And Descartes pops up as one of those epistemology 101 guys, along with Hume, Berkeley, etc, who questioned the empiricist tide of their times in some nicely simplistic fashion. They dramatised the "other" that looked to be subsumed in the contemporary discussion. But that kind of antithesis has to find its resolution in a triadic synthesis, not simply be left as a disjointed dualism.

    So sure. Take a pop at Descartes. Could he actually doubt "everything" in a reasonable fashion? Or was he simply illustrating the Peircean point? Knowledge develops by beginning from some "leap of faith" - a willingness to take one hypothesis as a plausible truth and then judge that based on its "real world" consequences. The metaphysical starting point then becomes believing there really is a world out there that impinges on us in such a way that we can be its pragmatic modellers.

    I would suggest that underlying this crisis in faith about our knowledge is the cultural shift from a theological perspective to a humanistic worldview. The Scholastics were quite content to acknowledge that human beings are finite creatures, with limited intellects. They did not think that we should know anything exhaustively, as God knows it. Rather, we know only what sense reveals to us. Still, we know what sense reveals to us.Dfpolis

    I don't buy that at all. Pragmatism doesn't just acknowledge our finitude, it goes further in saying we - as "aware selves" - are constructed via that very process. What gets constructed are habits of belief that are models of selves in worlds. So there is no soul that pre-exists the modelling relation. That kind of personalised psychological point of view is what emerges in forming notions of "a world out there".

    So we switch to not even expecting the phenomenal to have access to the noumenal. The phenomenal is the system of sign, the Umwelt, which is producing the "self" along with its "world".

    In this light, knowledge is all about the development of those kinds of regulatory habits. It is not about subjective, nor objective, truth. It is about the production of a subjectivity in contrast to an objectivity - a modelling relation which embodies a separation of "self" and "world". And that separation is what a system of sign mediates. The outside physical world becomes symbolised in terms of internal goals and desires.

    So, contra simplistic empiricism, all sense data are simply acts of measurement. A self-interested transformation of material energy into self-interestedly meaningful data has already happened as soon as sensations have "entered experience".

    So what sense reveals to us isn't finite because it is somehow partial, or lacking in omniscience. It is finite in the sense of already reflecting some useful structure of selfhood. It is the world as it could make sense to the habits of interpretance that have developed to produce some focal "us".

    Your scheme seems basically Cartesian in its dualism of mind and world. You accept that there is this stuff called "mind". And God has the all-sensing version. We have a limited embodied point of view. Animals lack something essential - the intellect or reasoning soul - and so have a very dull and extremely embodied perceptual experience.

    But my position is quite different from that. I would take the naturalistic view that we are talking of different grades of semiosis - principally the evolutionary advances of genes, neurons, words and numbers.

    So the Aristotelian intellect is the product of evolution reaching the level of semiotic modelling which we would recognise as discursive and rational. That is, it is semiosis mediated by words, then numbers.

    And this socially constructed understanding of human evolution does map comfortably to Aristotle's notion of the rationalising intellect as something extra even to the sensing soul. It is just that rather than a cut-down Godhood, it is about the kind of "self" that a new level of "world making" will produce.

    Humans - as discursive selves - are the product of sociocultural systems. It all started with symbolic language that could encode social ways of thinking. There could be an institutional memory, and hence the rise of social institutions and their socialised participants. There emerged a higher state of being or mind - the cultural-linguistic one. But it was a purely natural development, not any kind of divine shift.

    And then - the Greek flowering - we had the further development of a logical discursive self. Mathematical-strength discourse paved the way for mathematical-strength social institutions and mathematical-strength selfhood.

    In some sense, this was a depersonalisation or objectification of viewpoint. As animal selves, we are highly embodied in our own immediate biological concerns. As cultural selves, we start to become disembodied in our point of view to the degree that we take on some higher level institutional view of how we ought to behave, what we ought to think.

    And then now we have the possibility of a logical or rational "self" in a logical or rational "world". Again, no mystery. Just the kicking of semiosis up another level of abstraction or objectivity. But - as philosophy makes clear enough - trying to live as a self of that world is a little dicey.

    In human history, the turn to institutional rationalism was a powerful next step. We could both mechanise society and bring nature much more under our technological control. But the new "self" that this new "world" eventually creates stands in question.

    Anyway, getting back to Aristotle's intellect, the naturalist view is that this isn't talking about another step towards omniscient godhood and true knowledge. It is instead the direct continuation of a natural trend - the evolution of semiosis. And the reason Aristotle would have seen the discursive intellect as somehow coming from somewhere beyond the embodied and sensing animal soul is that its form indeed does come from the "beyond" that is human cultural development, with the "self" and the "world" that emerges there.

    What we are interested in as humans is to know being as it reveals itself to us. To the extent that we can "model" it with a system of comprehensible signs, we make it easier to respond to. Still, to the extent that we confuse our models with reality, to the extent that we think our "reduced" world is the real world, we are guilty of Whiteheads Fallacy of Misplaced Concreteness. The real world is not our model and it is always ready to hit us with a shocking surprise to prove it isn't.Dfpolis

    Yeah. But what I'm saying goes beyond that. I am stressing that the system of signs is Janus-like in that it encodes both "the real world" and "the real us". So experience is a reduction to a model that results in the twin emergence of some crystallised sense of "out there" vs "in here". And the fallacy of misplaced concreteness would be to think that this constructed selfhood is any more real than this constructed world.

    Now this seems to return us to an argument that only the noumenal is the real, the phenomenal is some kind of generalised illusion. There is the hard reality that is being modelled, and then this afterthought, the modeller, whose very actions of modelling are constructing "himself".

    And this could be the reductionist understanding indeed. There is something true about it, as naturalism would lead you to argue.

    But here is where I would personally embrace the more speculative metaphysical turn that is pan-semiosis. This is where we go beyond the naturalistic explanation of life and mind as modelling systems and begin to understand all physical reality as a self-organising evolution of an intelligible sign system.

    Check out current physics - with its information theoretic turn in particular - and pan-semiosis seems the case. The "reality" of information has become a standard "material fact". So as speculative metaphysics, it ain't so whacky. Science has already gone there now.

    But regardless, my main point here is that what Aristotle meant by the "intellect" maps very nicely to what we would understand about the social evolution of the human mind. And it has nothing to do with any approach towards a transcendent and absolute state of knowledge. It arises directly as a continuation of pragmatic semiosis. There was a jump to a new level of self-making and world-making with the invention of words and then numbers. Codes create memories and memories create institutions. Organismic behaviour could rise up another level of self-organisation - the ones we call social culture and then science and philosophy.
  • Did Descartes Do What We Think?
    In other words while he continued to know that he was in his chamber, he chose not to believe it.Dfpolis

    Surely it's the other way around. He believed he was in his chamber. And what he felt he knew - by rational doubt - was that this was in fact just a belief and no more.

    So simple empiricism - the evidence of the senses - has a problem when it comes to being "knowledge". It is quite plausible that any sensory evidence is some kind of dream or illusion. Psychology already reveals that. And logically, it is not impossible that an evil demon ensures this is the general case.

    He extended this to even his mathematical imaginings. It might be the case he believes a square to have four sides - as he can picture a square in his head and count them. But that might also be a continuation of a deception.

    To follow this line of thought to its natural end, he had to further suppose the evil demon would never let up. The fact that life couldn't operate unless he allowed himself to fall back into old habits - like just accepting his chamber was actually there and getting on with daily life - was still just sensory evidence and so still doubtable for this reason.

    Rather, it involved a willing suspension of belief,Dfpolis

    Explicitly the opposite. Descartes had to posit a relentless evil demon as the reason why it was logically possible he could be deceived, despite his wishes otherwise.

    So, if knowing is not a species of belief in this view, what is it? It is what Aristotle described in De Anima iii -- the actualization of intelligibility -- or, in more phenomenological terms, the awareness of present being.Dfpolis

    Well "the awareness of present being" is a hopelessly ambiguous term here.

    The correct answer in my view is the Pragmatic/Semiotic position taken by scientific reasoning. Descartes was essentially right. But that then means knowledge becomes founded on pragmatic belief. We have to take a chance, make a guess, form a hypothesis that is our belief. Then we see how operating in that light fares. We find out how false it is in practice. Truth becomes whatever stands the test of acting in the world as we feel we understand it to be.

    This is good psychology. It is how brains function. Minds are pragmatic models of the world - a system of signs or an Umwelt, and not some kind of veridical direct representation as is usually naively presumed.

    So Descartes problematised knowledge, highlighting that it couldn't in fact rise above belief.

    Then eventually the modern scientific epistemology emerged. With semiosis, we can realise that knowing isn't even about having a true belief, but about having a pragmatic one. The whole question of whether we can "see things as they actually are" becomes the charade. What we are in fact interested in - as modellers - is to reduce "the world" to an easily understood system of signs.

    The view we are constructing - biologically and socially - is not of the "world out there", but of "the world with us acting in it". And that is a big step on from the simplistic representationalism that Descartes helped problematise.

    So knowledge becomes about certainty over our possible courses of action. We are judging our ability to act in "the world" in a way that conforms to our long-run expectations.
  • Describing 'nothing'
    Definitions are simply guides, but use tells us much more.Sam26

    Hah. And how do metaphysicians, logicians, mathematicians and physicists use the word?

    But anyway, I would highlight the metaphysics built into your ordinary language examples - the way they rely on a simple classically-imagined counterfactuality ... a world composed of things. Some thing either exists as a propositional fact, or it doesn't.

    So nothing is just understood as "not one thing" - its etymological derivation.

    Say not one thing.
    I did not one thing.
    There is not one thing there.
    Your book said not one thing.
    Not one thing is easy.
    I have not one thing.
    I admit not one thing.

    This is a very restrictive understanding of "nothingness". And to the degree that one lacks the logical resources to challenge its dependence on simple predication - a calculus of particulars - one really can't hope to rise above an ordinary language confusion about what could be usefully said.
  • Describing 'nothing'
    IOW "it" could be (is the possibility of being) anythinggurugeorge

    but that still leaves a generalized possibility.gurugeorge

    I am agreeing that pure indeterminate potential - the possibility of anything - is a form of nothingness. But what I am arguing is that nothingness comes in two complementary forms. So confusion arises in trying to collapse the two into the one. There is a higher level of metaphysical insight in seeing how there is a dichotomy at work here.

    And it is recognised in Peircean logic.

    There would be nothingness understood as absolute generality - that to which the law of the excluded middle does not apply. And then nothingness understood as absolute vagueness - that to which the principle of non-contradiction fails to apply.

    So somethingness is the middle ground state where the law of identity does apply. Somethingness is the particular, the definite, the individuated. And hence the PNC and the LEM do apply to somethingness.

    But the LEM does not apply to the purely general. The general is empty of difference. It is the nothingness of a sameness. So that is one extreme way to arrive at a state of nothing.

    Then the PNC defines the nothing that is the vagueness of the indeterminate. It is neither the case, nor not the case, that something exists - exists even in the sense of a definite possibility.
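
    Put in rough symbols - my own gloss on the scheme, not Peirce's notation - for an arbitrary predicate P and item x:

        \[
        \begin{aligned}
        \textbf{Somethingness:}\quad & P(x) \lor \lnot P(x) \ \text{holds, and } \lnot\big(P(x) \land \lnot P(x)\big) \ \text{holds (LEM and PNC both apply)}\\
        \textbf{Generality:}\quad & \text{neither } P(x) \text{ nor } \lnot P(x) \text{ need be determinately the case (LEM fails to apply)}\\
        \textbf{Vagueness:}\quad & P(x) \land \lnot P(x) \text{ is not ruled out (PNC fails to apply)}
        \end{aligned}
        \]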

    You have the further dichotomy of the possible and the actual that rather confuses this discussion. I would point out that all definite possibilities are so because there is some definite context that makes that the case. Some general state of constraint must be in place such that a possibility has an actual form. So while a possibility is not yet actual, it could become the case because the circumstances are already actual. It exists, or is an individuated particular, in that sense.

    So there are further subtleties at work here. A classical notion of possibility relies on the definiteness of a context. And we know that a quantum approach to possibilities sees that kind of classical counterfactuality breaking down in a “spooky” fashion. QM is about real indeterminacy. Real vagueness.

    So to talk about nothingness, we have to get way beyond a classical conceptual framework which talks simply about empty space - gaps that might be filled. That kind of talk is already imagining a world of crisply individuated particulars ... and the counterfactuality which then parasitically becomes their imagined absence.

    That classical conception is fine as far as it goes. But to get at the more absolute version, logic itself points to the answer. If individuated definite somethingness is that to which the three laws of thought apply, then what is less than that, or beyond that, is that to which the laws don’t apply. Which is the absolutely vague or the absolutely general.

    So yes. The usual opposite to the idea of a metaphysical void is talk of a metaphysical plenum. And this everythingness is both the “other” of nothingness and also as good as a great big nothing itself in lacking any proper differentiation. There is a reason why metaphysics began with the idea of an Apeiron or unbounded potential, a chaos of possibility. A lack of coherent differentiation makes a good starting point for a creation story.

    But talk of possibility - to the degree it is still talk of some actual state of counterfactuality - is still dealing with nothingness at a classically particular level. It encompasses neither the vague nor the general in its attempted conceptualisation.
  • Describing 'nothing'
    'Nothing' is defined.unic0rnio

    Well, weren't you trying to define it as even the absence of a definition? That inclusion of an epistemic criterion already gets you into the problem that "nothingness" thus becomes a point of view. The view from "somethingness".

    And I simply say that one ought to go with that. It becomes an issue of what we can say about points of view themselves. You have to work within that framework, not pretend to stand outside it. This means to define nothing, you must define it in opposition to its proper "other". You have to work with the internalist perspective you are given ... as you can't escape it.

    First time I'm hearing of it and from what you've explained I agree with it fully.unic0rnio

    So, as I was saying...

    ...The Kyoto School might even be thought of as recovering a suggestion from one of the first Presocratic philosophers, Anaximander: namely, to think finite beings as determinations, or delimitations, of “the indefinite” or “the unlimited” (to apeiron)...
  • Describing 'nothing'
    Nothing is the absence of anything, even a definition. Therefore, for a true nothing to exist, every possibility must exist at every time but never any one at any particular time. These circumstances would prevent the nothingness from being defined and it would remain nothing.unic0rnio

    I would suggest you are mixing up two alternatives that together give the more complete ontic view.

    So nothingness can be defined as the definite and actual lack of anything. A true state of nothingness lacks possibility as all possibilities are what have been removed. It is an emptiness. And it ought to be somehow beyond any particular kind of spatiotemporal container. An empty box still leaves the problem of there being a box.

    Now if that is our best image of true nothingness - the absence even of possibility - then what is the opposite of that? It would be a state of everythingness. If every possibility is being freely expressed, all at once, no matter if the possibilities might seem to contradict, then that is also a kind of perfect nullity as nothing in particular has any clear existence. Even spacetime might be regarded as having an infinity of directions rather than just three spatial dimensions bound to a thermal direction of action. So it would be just a mass of fluctuations going off in orthogonal, disconnected directions with no coherence. A white noise of possibility with no concrete actuality. A vagueness or Apeiron.

    So a definition of nothingness doesn't really make sense by itself. We are only really standing in our state of definite somethingness and noting that we can create the kinds of emptiness that are definite existence drained of all further possibility. A container without contents.

    Then we can flip that over to imagine an opposite bound to a state of definite somethingness. That now becomes an absolute everythingness that is like a contents without a container. There is every unstable possibility, and yet no stable actuality.

    Believe it or not, this is progress. Metaphysical furniture understood in terms of a mutual definition means we can at least be sure that where we are at lies somewhere in between the two extremes we have just identified. Metaphysical work can begin.

    So on the one hand, our condition of observed somethingness is bounded by a state of "pure empty container". At the other extreme, it is bounded by the opposite condition of a state of "pure unbounded content". We exist measurably between two limit states. And that ought to map to the real world in some fundamental way.

    One way it does is in the standard Big Bang~Heat Death cosmology. The Big Bang, on many accounts, started in a state of quantum flux - a pre-space-and-time roil of "quantum foam". So pure unbounded content.

    Then the universe is heading for the opposite: a Heat Death. It will expand and cool to become as empty of energy density as possible. So it will be a container without contents. In some sense, it will be a generalised spatiotemporal container. But now it will have only the minimal quantum sizzle of zero degree radiation. A virtual stuff in fact. Time and space will lose any meaning as expansion will have ended and energetic interactions will have ceased to happen in any real sense.

    So the question is not "what is nothingness?". It is what our attempt to conceive of nothingness then directs our attention towards. What is its "other" that we might have been missing? Having found the two possible extremes that bound what we understand as the actual somethingness of physical existence, a larger evolutionary story can then slot into place where the Cosmos is the transformation of the one into the other in some useful and measurable sense.
  • Physics and Intentionality
    I am merely saying that phenomenologically speaking, from the perspective of the ordinary unreflective individual who would never automatically, and without considerable education, begin to interpret experience in terms of signs, affect is basic.Janus

    Then all you are saying is that people brought up in contemporary western culture would learn to say these kinds of things as that reflects folk epistemology. It is hardly fundamental.

    On a different analytic perspectives we could say that semiosis is fundamental, or we could say that semiosis and experience are co-arising,Janus

    Well I am arguing that ontologising experience as a semiotic process is the fundamental epistemic move. And this stands in contrast to ontologising it as a substance. So I am taking a hylomorphic and semiotic position on the question.

    If you too are rejecting a substantive sum in regard to experience, you would need to communicate that in your choice of words, the direction of your arguments.

    As it stands, I haven’t seen that. You still want to make experience - affective experience - basic. And then say at worst it is a chicken and egg situation.
  • Physics and Intentionality
    (The noumenal would be more properly what is real but not revealed to us, and hence kind of irrelevant to this discussion).Janus

    Huh. Internalism makes no epistemic sense without the assumption that there could be the external as its other. So given this is about the foundations of epistemology now....

    But the point is that they are united by a common form of experience which gives rise to the possibility of a common system of signs.Janus

    I was saying it is the other way round. Otherwise this ontologises experience as substantial being.

    It is the affective aspect of experience that is really determinative;Janus

    And what is that founded on except some process of neural semiosis? We have our evolutionary biology in common. We grow up in the same physical world. So sure, it may be an aspect of neurocognition. But once we are talking about affect as a rational semiotic process, it is the mechanism of the sign relation we are making ontically basic.

    You can’t have your cake and eat it. If you want to make experienced affect basic, then you are talking a very different story. The usual one of substantial being and not semiotic process. So you have to decide which horse you back.

    What you say is true of 'a linguistic self", but there is a deeper pre-linguistic sense of self and other, upon which the linguistic self and other is parasitic, and without which it would be impossible, and that is what I am referring to.Janus

    Sure. I’ve said that a thousand times. The vertebrate brain is based on the forward modelling that makes a self-world distinction basic. We have to know it is our head that turns and not the world that spins. But the animal sense of self is not an introspective one. It lacks that social structure.

    It's not my favored conclusion, but my favored inclusion. You, unfortunately, have your diametrically opposed favored exclusion; which leaves the fullness of your account severely wanting. Art is (or at least can be) much more than what you say, but for you to see that you would need to experience that 'much more'. Hopefully one day you will.Janus

    If you can’t deal with reasonable arguments then best you don’t reply.