Comments

  • Against All Nihilism and Antinatalism
    Look at a baby human versus that of many other mammals. The baby human is the most defenseless. Why? Very few innate behaviors. Also, the epigenetics and the learned behaviors of other animals have an instinctual component that is not driven by the much more generalized learning process that humans possess via linguistic/conceptual brains.schopenhauer1

    Since you asked, Schop, I agree with Pseudonym that you seem to be trying to draw too sharp a line here. It doesn't make sense to argue that Homo sapiens abandoned neurobiological instinct for socially-constructed desires. Sure, socially-constructed desires radically change things for humans. Yet the underlying biological continuity still exists, and we can argue that linguistic culture largely serves to amplify that evolved instinctual basis rather than to somehow completely replace it.

    Yes, it is possible that humans evolved to be less instinctual so as to be more open to cultural shaping. But I don't think there is much actual evidence of that being the case.

    Humans are born more helpless - their brains a mass of still unwired connections - because we happened to become bipeds with narrow birth canals trying to give birth to babies with large skulls. The big brains were evolving for sociality and a tool-using culture. So babies had to be squeezed out helpless and half developed, completing their neuro-development outside the womb - a risky and unique evolutionary step. But also then one with an exaptive advantage. In being half-formed, this then paved the way for the very possibility of complex symbolic speech as a communal activity structuring young minds from the get-go. It made it possible for culture to get its hooks in very early on.

    Of course this evolutionary account is disputable. But it seems the best causal view to me. And while it says that there was undoubtedly some evolutionary tinkering with the instinctual basis of human cognition - we know babies have added instincts for gaze-following and turn-taking, stuff that is pre-adaptive for language learning and enculturation - you would have to be arguing for a more basic erasure of instincts that are pretty fundamental for the obvious evolutionary reasons that Pseudonym outlined.

    It is natural that animals would have an innate desire to procreate - have sex. And it is natural that animals would have innate behaviours that are particular to whatever parental nurturing style is their ecological recipe for species success.

    These in turn might be highly varied. There are many possible procreative strategies - as you know from discussions of r vs K selection.
    http://www.bio.miami.edu/tom/courses/bil160/bil160goods/16_rKselection.html

    However we can make reasonable guesses about what the human instinctual basis was, and remains. Certainly a desire to have sex and an instinct for nurturing are pretty basic and hormonal. Which is enough to keep the show on the road so far as nature is concerned.

    Now arguing in the other direction, I would agree that this hardwired biology is not of the "overpowering" kind popularly imagined. Culture probably does have a big say. As society becomes a level of organismic concern of its own, it can start to form views about what should be the case concerning procreation. The drivers might become economic, religious and political - these terms being a way of recognising that society expresses its being as economic, religious and political strategies.

    And likewise, society might wind up turning individual humans into largely economic, religious or political creatures. We might really become incentivised to over-ride our biological urges as a result of the direction that cultural evolution is taking. This may get expressed in terms of the full variety of r vs K strategies. We might get the range of behaviour from Mormons or other cultures of "strength through big families" vs the economic individualism which turns supporting a family into a financial and personal drag (with the individual now becoming, in effect, a permanent child themselves - never wanting to grow up and so creating a new dilemma for the perpetuation of that society, as is big news in Japan).

    So, in my view, it is too simplistic to draw a sharp line between biological instinct and linguistic culture in humans - especially when it comes to any hardline anti-natal agenda. Although there is certainly this added level of evolutionary complexity in play with Homo sapiens.

    We are at an interesting time for humans. Society has shifted from an agricultural basis to an industrial one, and now believes it is entering an information age that really cuts itself off from its biological roots. So culture is churning out individuals with psychological structures that express that current stage in its development.

    Can that mindset flourish and last? Is it realistic or out of touch? Can a society predicated on life-long infantilism survive?

    It might, if we can all afford robot slaves and crack the fusion free energy problem, etc. Anything remains possible - that is, if you don't pay any attention to the underlying economics of biological existence itself. The bottom is surely about to fall out of that dream - that we aren't simply a species gorging on a short-lived windfall of fossil fuels. But that's another thread.
  • David Hume
    You have to rely on the assumption that the future will be like the past in order for past evidence to be relevant to the future. Which is assuming the conclusion. I don't object to you doing it, I just object to your claim that it is reasoned.unenlightened

    The reasoning might not be purely deductive, but it is scientific reasoning that thus includes a deductive element.

    So a full account of the reasoning would go that we start with an abductive step - a guess at a causal mechanism. Then we deduce the observable consequences. Then we tally the inductive confirmation.

    As I argued earlier, a belief in induction is justified by a guess at a mechanism - history builds constraints on free possibility. Then from that, we can deduce the observable consequence - the past can be used to predict the future in this fashion. We will see this causal mechanism at work. And then observation - of the success of this inductive approach - confirms the guess inductively (or not).
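
    To make that loop concrete, here is a minimal Bayesian sketch of my own (nothing Peirce himself wrote, and the hypothesis names and numbers are invented): the priors stand in for the abductive guesses at a mechanism, the likelihoods for the deduced observable consequences, and the posterior update for the inductive tally against observation.

    ```python
    # Toy illustration of the abduction -> deduction -> induction loop.
    # Hypothesis names and probabilities are made up for the example.

    # Abduction: two guessed mechanisms, each with a prior degree of belief.
    priors = {"constraints_accumulate": 0.5, "pure_chance": 0.5}

    # Deduction: each mechanism implies a probability that tomorrow
    # resembles today (the observable consequence of the guess).
    likelihood_of_repeat = {"constraints_accumulate": 0.95, "pure_chance": 0.5}

    # Induction: tally a run of observations in which the past did
    # predict the future, updating belief in each mechanism (Bayes' rule).
    observations = [True] * 20  # 20 occasions where the pattern held

    posteriors = dict(priors)
    for repeated in observations:
        unnormalised = {
            h: posteriors[h] * (likelihood_of_repeat[h] if repeated
                                else 1 - likelihood_of_repeat[h])
            for h in posteriors
        }
        total = sum(unnormalised.values())
        posteriors = {h: p / total for h, p in unnormalised.items()}

    print(posteriors)  # belief shifts heavily towards the constraints story
    ```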

    The Hume/Newton thing arises out of a particular metaphysics - a belief that the world might be deterministic, atomistic and mechanical. But rather paradoxically, that nominalism created some serious realist type problems. It couldn't account for the existence of backdrop dimensions, such as space and time. It couldn't account for how physical events were ruled over by natural laws. It couldn't account for gravity's action at a distance.

    The background metaphysics that Hume relied on to motivate an argument was so patently full of nominalist holes that it never was a complete story. Yes, it did speak to a deductive consequence of its axiomatic hypotheses - all the guesses about determinism, atomism, etc. And that part of the belief system could be inductively confirmed in its own terms. But then it also relied deductively on a set of unobservables - like the laws, and space-time, and action at a distance. These "must" exist according to the metaphysical set-up, but they could never be directly measured or shown. So they could not be inductively reasonable as such.

    So Hume was making points that seemed appropriate in a particular metaphysical context. They were "reasonable" in his day. But if we are talking about a modern scientific view of reasoning, then introducing the third thing of abduction, or an axiomatic leap of the imagination, makes a big difference. It says knowledge works quite differently from how the traditional conflation of deductive logic and rationality might want to represent it.

    The sharp line folk tried to draw between the rational and the empirical was too strong. All reasoning relies on a mix of both. And indeed, the mixture is triadic.

    You have forms of induction book-ending the process. Abduction is the generalisation step, the inference to the best explanation. It begins as vague or hazy intuition and snaps into crisply expressed hypothesis.

    After that, deduction can kick in. It has something to work on and can do its syntactic, rule-bound, thing. But while the consequences of deductive argument carry the stamp of certitude (valid is valid), it is still a case of garbage in/garbage out. The hypothesis might be wrong, or more likely just part of the story. So nothing is reasonable until it is measured against the world it pretends to model. Induction from the empirically particular back to the abductively general has to close the loop, confirming the initial guess is right (or right enough for all practical purposes).

    So "reasonable" should mean reasonable in that full scientific method sense. And induction itself seems reasonable in that light. We can guess at a mechanism - the accumulation of constraints. We can deduce the observable consequences. We can measure the degree to which nature conforms to our model.

    And as I say, we then encounter the ways that nature doesn't in fact conform - as in the frequency with which abrupt or catastrophic changes do occur in the world. So at that point, we need to go around the loop again. We realise that we were presuming a linear world. We need to develop a larger model that relaxes that constraint. That leads us towards non-linear models - non-linear models being more generic than linear ones.
  • Thoughts on Epistemology
    Whether a doubt feels vaguely intuited or crisply expressed is a separate issue. And one that makes no essential difference except that a doubt has to be made concretely counterfactual to achieve the status of being part of a rational argument.
  • A Way to Solve the Hard Problem of Consciousness
    Has the Cartesian dualism inhering in the earlier generation of cognitivism, with its sharp dichotomizing of affect and cognition, been overcome by what you have called the general physicalist monism of the newer approaches (and some not so new ones)?Joshs

    Fair enough. I am biased because I particularly sought out those who I felt took a properly integrated view of the issue.

    So yes, the Platonic tripartite model of humans - reason as the charioteer trying to control a chariot pulled by the two horses of the higher passions and the baser instincts - still has a lot of cultural pull. And a computational turn in psychology really did foster a brain in a vat view.

    For me what is crucially at stake is the ethical-psychotherapeutic possibilities opened up by a way of thinking.Joshs

    Heh. So now this is yet another direction. I'm not sure what you have in mind exactly.

    It seems to me that today one set of cognitivists gather on the conservative, that is, Nietzschean-Darwinian, side of a philosophical divide, while a much smaller group (Shaun Gallagher, Varela, Jan Slaby, Matthew Ratcliffe) attempts to incorporate into their thinking phenomenologists like Husserl, Merleau-Ponty and Heidegger, and poststructuralists like Foucault and Lyotard.Joshs

    Well, I wouldn't be on the side of the PoMo johnny-come-latelies. Another personal bias. I am with the structuralists rather than the post-structuralists. :)

    But also my views are based rather directly on the science. So I am dealing with concrete neurobiological models and empirical evidence. SX is probably more into the philosophical politics here.

    I'm sure you're aware that this latter group of philosophers is responsible for the ongoing assault on the presuppositions of physicalist science.
    We are accustomed to dealing with this from humanities and cultural studies departments, but a gentler version of it is now being put forth by those calling themselves cognitive scientists.
    Joshs

    So you are saying that the PoMos are attacking the physicalists and the Darwinian conservatives are attacking the PoMos?

    That seems to be what you are saying, but it reads a little hazy.

    Would you agree that the philosophical understanding of concepts like quality, quantity and essence have undergone continual shift over the past centuries, and those changes in understanding make their way subtly into the empirical descriptions of scientists?
    If so, then I would expect that the notion of quality as a 'general theoretical essence', which is about as Cartesian a definition as I can imagine, has room for updating.
    Joshs

    Again, I'm unclear about your point.

    But what I'm thinking is that science - to be quantitative - has to in the end just operationalise the metaphysical qualities it seeks to explain. And this is not necessarily a bad thing. It seems an honest thing.

    So science invents these terms like energy or entropy that are essentially pretty meaningless. They sound like science is speaking of some material substance, but really the terms just become place-holders for something that is a constant factor or a conserved quantity. It seems like there is some stuff. And thinking that way allows for a system of measurement which speaks about quantitative variations in that stuff - differences in its location, form, amount, etc. Yet the scientists don't really believe the stuff has the substantial being that giving it a solid-sounding name implies.

    This has become really obvious now with the information theoretic turn in physics. Now science just shrugs its shoulders and says we can count primal bits. We can throw away all the materialist presumptions and treat the purest quantification - a 0 and a 1 - as the quality, the conserved stuff, that we know how to measure.

    To me, that's pretty sophisticated. Especially in contrast to the qualia talk that kind of does the exact opposite for philosophy of mind. It doubles down on materialism by adding a mental stuff to the physical stuff.

    So I see the information theoretic turn as a paradigm shift that can get science out of a material monism without then falling into a Cartesian dualism.

    But I doubt that was the angle you had in mind.

    So again, is there a question here? Perhaps the PoMo take on a phenomenological alternative to mainstream neuroscience's Darwinian naturalism concerning embodied cognition offers important psychotherapeutic results. That might connect the dots. However if that is the line of thought, I'd need more details to have a view.
  • Thoughts on Epistemology
    Whenever something doesn't seem quite right, there is cause for doubt.Metaphysician Undercover

    But how could you know something wasn't quite right unless you were making a prediction that it would be otherwise in some sense?

    So yes, the prediction might not be a vivid and specific expectation - an attention driven prediction. But it could still be a prediction in the sense that you have some habitual expectation about things, and then that more general expectancy is the background against which surprises can pop out and catch your attention.
  • Thoughts on Epistemology
    No amount of clarification of terms will overcome that fundamental untranslatability of language.Joshs

    But when you say something that actually reaches past a conventional or habitual level of understanding, isn't that the feature rather than the bug? Isn't that how philosophy or understanding generally manages to stay creatively open and progress?

    So every sentence of any interest remains open to fresh interpretation - even to oneself. We can twist it and turn it in the light to see new possibilities of what might be meant. The meaning is not fixed but already open to another point of view.

    This is one of the things that flips a theory of truth on its head. Language is not a system of frozen meanings, petrified semantic commitments. At the creatively open edge of reason, even the same sentence can be understood many slightly different ways by its own speaker. And that is a good thing. It is how language can both stretch itself elastically while also aiming at some tightest possible fit.

    I see that as the dichotomistic tendency of Grayling's OC1 vs OC2 which Sam cites here. Plasticity vs stability. Novelty vs habit. A basic relational freedom combined with the possible discovery that there is some eventual metaphysical-strength limit.

    All this talk about belief vs doubt. Sam was saying something that didn't make much sense to me about neural states. But the neurobiological story of the brain is how it is organised by the dichotomy of the habitual vs the novel.

    If we want proof that knowledge is built on a "background" of unquestioning belief, then we can read that story into the way the brain is founded on the accumulation of useful and embodied habits. And then in complementary fashion, the brain is also designed to "doubt" - apply its attentional resources - whenever this general backdrop of belief fails to predict the world in suitable fashion.

    So a naturalistic basis is right there to be seen. However its logic is dialectical. Which is where things start to go all uncomfortably Hegelian for some. :)
  • Thoughts on Epistemology
    you've taken what should be a common courtesy towards Sam as the instigator of this thread and turned it into an excuse for your not starting a thread of your own.Banno

    Dry up Banno. I made posts that addressed his points about neural states and attempts to find a grounding in something inarguable because it is "natural". If we are now discussing red herrings like whether Paris is the capital of France, it is because of your efforts to deflect from the pragmatism towards which Wittgenstein was moving.

    There is actually an amusing contrast here. Peirce started off as a quietist and then became keen on a metaphysical-strength epistemology. So how that pans out could be instructive for someone actually wanting a foundational story.

    If you are not interested, fine. Butt out.
  • Thoughts on Epistemology
    Yep. Whenever it comes down to it, you don't actually have an argument. It was simply a posture.
  • Thoughts on Epistemology
    If you answered things straightaway then life would be simpler.
  • Thoughts on Epistemology
    That's the trouble with pragmatism. It does not address questions of truth. It pretends they are all questions of justification.Banno

    That's true.

    But can you explain to me what the difference actually is as far as you are concerned?

    Of course, pragmatism doesn't actually pretend it's all just justified belief. Just like it doesn't deny the world exists in some fashion that is separate from our desires and conceptions. So it certainly addresses the question of truth head on and gives its pragmatic answer. But again, the stage is yours. Tell us what the critical difference is here, using the example supplied.

    To remind, "God created the earth and mankind, the Big Bang never happened" is true IFF God created the earth and mankind, the Big Bang never happened.

    So who speaks the truth here, and in what way is that so?
  • Thoughts on Epistemology
    "P" is true IFF P.Banno

    So let's take a more useful example to flush out what you could possibly mean by epistemic justification.

    "God created the earth and mankind, the Big Bang never happened" is true IFF God created the earth and mankind, the Big Bang never happened.

    Fine. In the most question-begging way conceivable, we have set out a truth condition.

    But now how would you go about cashing that proposition out? If you claim to be interested in epistemology, then start doing some.

    We have two convinced schools of thought - the creationists and the cosmologists. How does "Paris is the capital of France", as your prototypical example of commonsensical truth, apply in sorting out how doubt and belief ought now to proceed here?

    If you were actually saying anything helpful in pushing that example, its usefulness will be made clear in your very next post.
  • Thoughts on Epistemology
    That's as close as can be got, and I have said it to the point of tedium.Banno

    And to the point of tedium, you won't discuss the informal acts of measurement that are needed to show such truth in practice. So same old same old. You leave out the "self" that is needed to give propositions any grounding purpose and any natural limits to their concerns about errors, exceptions or doubts.

    And of course, that extra stuff is central to making sense of such different classes of proposition as Paris is the capital of France, and here is one hand, here is the other.
  • Mental States and Determinism
    Some Brief Arguments for DualismWayfarer

    Rather, that's a brief argument for semiotics and the epistemic cut.

    Yes, it shows that there is a separation of our "minds" from "the world". The interpretation of marks is separate from the physics of the marks. And in turn, that informational separation is how interpretance can arise to regulate the actual physics of the world with some purpose in mind.

    But actual dualism is avoided by there being that living connection - the feedback loop which connects the two sides of the modelling relation. The habits of interpretance can only survive to the degree they do useful material work.

    So the mind is not actually free or transcendent. It is embodied and rooted in an ultimately physicalist purpose.
  • Thoughts on Epistemology
    Indeed, as you were so forthcoming when asked if it is true that Paris is the capital of France.Banno

    That's a little lame when you wouldn't give a definition of what "truth" might be taken to mean in your view.

    I agreed it might be tautologically true according to some social convention. And I pointed out how inadequate such a definition of "true" might be in any sensible debate about realism - as might hinge on Prof Moore and his flapping hands.

    But as usual, when faced with an actual argument, you went radio silent for a while. And now re-emerge clinging onto this as some unanswered winning remark you might have made.

    You can always go back and address my actual replies. But I know you won't. It's all impression management as usual.
  • Thoughts on Epistemology
    Notions of absolute truth were laid to rest at the start of last century, with Moore and Russell's criticism of Absolute Idealism.Banno

    Hah. Well there is certainly still something in Hegelianism. But it took Peirce to make the case that there is no direct correspondence between the reasoning subject and the objective world. The mediation of the relation by signs pretty much ensures that there isn't - as the comprehending self and its comprehended world arise separately from the world in itself.

    So what you are expressing here is some personal prejudice about what have been the twists and turns in the development of epistemology. Moore and Russell hardly ended anything. They were already blundering into logical atomism.

    The world is too complex for one Grand Scheme to provide us with The Truth.Banno

    Oh dear. Again that may be your impression, but after checking out the great variety of epistemologies on offer, I am repeatedly surprised by what a robust scheme Peirce arrived at.

    So you have your view. I have mine. The difference is mostly that I am prepared to supply the arguments and evidence for mine. You instead have adopted the easy position of the arch-sceptic. You can just keep saying "I doubt that very much".

    You even seem proud that you won't even read anything about Peirce when it is offered. It's a funny attitude to encounter. But variety is what I enjoy.

    Pragmatism says nothing of the truth of love, beauty, courage, respect. It is a philosophical sideline.Banno

    Hmm. But pragmatism done properly speaks directly to the values of the "self" that is doing the philosophising. I keep pointing that out to you. It puts the other side of things - the self that hopes to discover itself in its world - in the limelight.

    Of course, this is a fundamentally anti-Romantic and anti-Transcendental enterprise. A lot of folk - you too apparently - don't like that mystical side of life being called into question and treated as a scientific inquiry.

    Yes. I can see how it might seem to threaten Philosophy. Science has taken over metaphysics pretty much entirely, and now it is back for the rest. :)

    But to me, that is what progress looks like. And I'm always willing to make the argument in full. I don't need to hide behind ambiguous non sequiturs and one liners.
  • Thoughts on Epistemology
    Perhaps talk of absolute truth led philosophers astray, so that they threw out good old plain ordinary truth along with absolute truth.Banno

    But that's not really the issue, is it. Yes, of course, it is impressive that we seem to find it pretty easy to deal with everyday "truth". Agreeing on the facts is simply about mastering the right social habits.

    So where philosophy begins is when we want to move on to an epistemic theory that itself is "true", or at least offers an analysis of the best way to go about things. This is basic to moving away from the everyday socially-constructed forms of knowledge and establishing an epistemic method that can be extended way beyond - even into the realms of the metaphysical.

    The search for that ideal epistemic method is hard and ongoing. But we can see that it has largely cashed out as pragmatism and the scientific method of reasoning. And philosophy as a training aims to foster the critical thinking skills which are involved in applying that epistemology.

    So you can continue with your anti-metaphysical griping. It counts for nothing. Metaphysics is alive and well. In scientific circles anyway. :)

    The correct employment of doubts (and beliefs) is an issue. But just as obvious is that most folk have no trouble distinguishing between the everyday socially constructed truths (like Paris being the name given to a city that has also been designated a nation's capital) from the philosophical issues surrounding epistemology itself.

    In conflating the everyday with the deeper story, you not only show a failure in critical thinking, you also wind up excluding what is actually fun and interesting about metaphysical level inquiry. And that makes for a dull life, wouldn't you say?
  • Thoughts on Epistemology
    One might be things we are caused to take as indubitable - Sam, from the OP.

    Another might be constitutive rules, which might be doubted outside the game they help constitute.

    A third might be propositions that are shown, such as "here is a hand".
    Banno

    1) The first is too strong. Even an axiomatic or grounding supposition needs to be doubtable to be believable. It has to be framed in a way that has an explicit contradictory - a counterfactual axiom - to even have any explanatory bite and not merely get classed with the set of propositions that are “not even wrong”.

    So of course we choose axioms on the grounds of being the least doubtable. That is how they can be the most believable. But the fact that the whole business is founded in this counterfactual game means that the necessity is more about the necessity of just making some abductive leap. We can always circle back to have another go at the axiomatic basis if the results of the axiomatic system don't seem to be working out so well ... on pragmatic or empirical grounds.

    2) The second is right in emphasising the need to just make some rule to get a game of inquiry going. Even a bad guess is a good guess so long as it is a definite guess - one that is crisply counterfactual in its framing.

    But what we need to avoid is the suggestion that the game is arbitrary. The game is going to be judged in terms of the purpose it accomplishes. So there is that global constraint, the empirically-grounded one, that feeds back to say something about the quality of the grounding axiomatic choices.

    And also, more needs to be said about the epistemology of abduction. Peirce already made the mathematical argument that the history of the universe is not long enough for humans to have made even a few right basic guesses at random. So we need an explanation for why our guesses tend to be rather good. And the reason is a non-linearity, as illustrated by the exponential ability to discard alternatives in the classic 20 questions game. If we can cut the total number of possibilities in half at every step, not just knock them off one by one, then it is much less of a surprise that we can employ a dialectical logic to generate our metaphysical axioms.

    We arrive at constitutive rules via dichotomies for a good reason. Non-linear search beats linear search by a country mile^2. ;)
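
    Just to spell out the arithmetic behind that 20 questions point, here is a toy sketch of my own (only the scaling matters): halving the field of possibilities at each step needs roughly log2(N) questions, while knocking candidates off one by one needs on the order of N. A million possibilities fall to twenty well-chosen questions, which is where the game gets its name.

    ```python
    import math

    # Toy comparison: how many yes/no questions it takes to isolate one item
    # from N possibilities under the two search strategies.
    for n in [20, 1_000, 1_000_000]:
        one_by_one = n - 1                 # linear elimination, worst case
        halving = math.ceil(math.log2(n))  # cut the field in half each time
        print(f"N={n:>9}: one-by-one needs {one_by_one:>9} questions, "
              f"halving needs {halving:>2}")
    ```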

    3) The third is the baffling one. It seems like an attempt to be empirical. It claims - in pointing to something particular - to demonstrate the existence of some grounding or backdrop context.

    But while that background of belief exists, it would be better just to point to it directly. Everyone of course can see by your actions that you presume a certain dichotomy of self and world to be basic. But better to say that directly - put it verbally on the table for forensic examination - rather than try to get away with some ostensive demonstration.

    It always feels more like an attempt to evade real debate than to answer the sceptic. To grunt and point ain't ever as good as presenting a proper epistemic theory.

    So yes. There must be some general ground of belief that then makes all further doubts crisply local and particular. That is standard pragmatism, and standard psychology. Without an established set of habits, we could never do anything that actually counted as creatively novel.

    But flapping your hands about and appealing to the strength of some established communal belief shared by an audience just seems a bad faith attempt to dodge the serious epistemic questions at stake.
  • David Hume
    Apo must be having a bad day. It usually takes three or four posts to goad him into an ad hom.Banno

    Bad day? Every laugh at your expense must surely be an entry in the credit column of the great ledger of life. And now you admit that your aim is to goad. Checkmate, mate.

    So, the necessary Banalities having been completed, let’s get back to you explaining to me how empirical correspondence operates in the absence of conceptual coherence.

    You prefer the one, and hope to avoid mentioning the other. But sadly, even induction relies on a coherent metaphysics back in the real world of pragmatic knowledge.

    And I just demonstrated that fact in mentioning the need to incorporate catastrophe theory into any full view of the probabilistic basis of reasonable inference. These days (well, for these past 40 years at least), you would need a positive reason to believe your phenomena actually inhabit a stable linear realm of constraints. You would need to know there was no parameter slowly creeping into critical territory.

    So how do you work that discovery into your own personal worldview I wonder? (Well not really, as it’s obvious.)
  • A Way to Solve the Hard Problem of Consciousness
    As far as the history of psychological theory, where have you seen accounts integrating the affective and the cognitive before 10 years ago, outside of a few fringe writers?Joshs

    Well my view is shaped by having been deeply concerned with the research into the question 30 years ago. So yes, there was a cogsci representationalism that was the mainstream at the time. And I was interested in the counter history of the more embodied and affective approaches.

    There were plenty around, I found. For instance the Soviet work on orienting responses that followed on from Pavlovian conditioning was very influential on me. But also, it would be fair to characterise it as fringe through the 1980s. And personally I was a little annoyed when a second rate hack like Damasio came along at the right moment to catch the eventual mainstream backlash against good old fashioned symbolic AI. :)

    But that aside, I’m finding your OP - especially given its contentious title and liberal name dropping - rather confused. If you have some particular thesis, it is lost on me.

    Perhaps you can have a go at clarifying how a physicalist conception of qualities says something about a physicalist conception of qualia. I fear that there is only some sleight of tongue at work here.

    You see, in my view, qualities are the general essences that science would name - the basic categories of substantial being like gravity, time, energy, entropy, work, etc. And then what makes that naming of entities actually scientific is they are able to be quantified in terms of measurements. We can relate time and energy as physical qualities because we also know how to measure them in terms of seconds and joules.

    So the dichotomy (or binary) of quality~quantity is about the general vs the particular, the general theoretical essence and the particular measurement framework which deals with its extension in a world (of time and spatial dimensionality).

    But qualia, from philosophy of mind, is something else. It is the atomisation of experience that just wants to reduce that general quality - experience - to a named variety of particular kinds or forms of experience. It builds in a Cartesian dualism by design. It is a way of talking meant to forever frustrate a deflationary neurocognitive approach to “the mind”.

    So really, it is a bit of a fraud. A cunning ruse. If you get folk talking seriously about qualia, they have already lost the battle against dualism. The whole Chalmerian enterprise was a rhetorical trick that derailed philosophy of mind in the mid 1990s (in my view).

    The better answer was the embodied cognition movement that then followed. But as I say, for me personally, that had already happened. Even early cog sci had an enactivist flavour with folk like Ulric Neisser. And you couldn’t get more mainstream than the “father of cog sci”. ;)

    Anyway, again you seem to be making some play on the notion of qualities and qualia. To me, this is where a general physicalist monism and a lingering Cartesian dualism are in fact in direct opposition to each other, and so not a point at which they could be joined. Perhaps you can clarify your thesis in this light.
  • David Hume
    I much prefer my question. One hopes not to need celestial mechanics and Bayesian Inference in order to plan one's breakfast with confidence.Banno

    Focus, Banno. Breathe deeply and focus. :)
  • David Hume
    Different question. I was emphasising what makes it reasonable to believe in causal continuity. You are now asking the empirical question of where is the counterfactual that would cause you to doubt that continuity in some particular circumstance.

    You also seem to miss the important point. We do know that catastrophes can befall even sunrises. Supernovas are just one such possibility.

    And yet, even then, we now have mathematical-strength accounts that make predictions even about such unpredictability.
  • David Hume
    This is just a failure of the atomistic paradigm, it does not refute the simple fact that effects are the result of causes.charleton

    So you have a simple deterministic account of the quantum eraser experiments that doesn’t involve retrocausality or some kind of outlandish multiverse metaphysics?

    Something has to give when faced with the evidence of quantum contextuality as a causal thing.
  • David Hume
    Why is it reasonably probable that the past predicts the future? Because the constraints or deep structures that generate patterns tend to have been built up bit by bit over a long history. For that historic weight of constraints to change, it seems probable that it would therefore have to be picked apart slowly in the same fashion - bit by bit.

    But also, we know from empirical observation of nature, and now logical models of that nature, that catastrophic collapse can occur. What took a long time to build up, can also collapse in sudden and predictably unpredictable fashion.

    So the world is actually far more interesting than Humean and Newtonian notions of determinism and probability could know.

    We have new models of probability - non-linear and chaotic ones - that change the good old Humean debate beyond recognition anyway ... even after we have abandoned rigid deduction in favour of Bayesian induction as an epistemic foundation.
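
    As a stock concrete illustration of the sort of non-linear model meant here (my own toy sketch, not anything from the Hume literature), the logistic map shows how a control parameter creeping upwards carries a system from stable regularity, through oscillation, into effectively unpredictable change.

    ```python
    # Logistic map x -> r*x*(1-x): a standard toy non-linear model.
    # As the control parameter r creeps upwards, the long-run behaviour shifts
    # from a stable fixed point, to oscillation, to effectively chaotic change.

    def long_run_values(r, x0=0.2, burn_in=500, keep=4):
        x = x0
        for _ in range(burn_in):
            x = r * x * (1 - x)
        values = []
        for _ in range(keep):
            x = r * x * (1 - x)
            values.append(round(x, 3))
        return values

    for r in [2.8, 3.2, 3.5, 3.9]:
        print(f"r={r}: {long_run_values(r)}")
    # r=2.8 settles on one value, r=3.2 flips between two, r=3.5 cycles
    # through four, and r=3.9 never settles into any repeating pattern.
    ```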
  • Thoughts on Epistemology
    I find the very term "language game" rather strange. It apparently means nothing other than "set of restrictions". Still, it bothers me that there are people who choose to call it "language game" instead of something simpler such as "set of rules". There must be some kind of strange process going on behind the scenes. I am not following what is popular, apparently. Certainly not Wittgenstein's train of thought.Magnus Anderson

    Yeah. It is difficult to see what is considered special about this as it seems simply another restatement of pragmatist, or social constructionist, approaches to truth.

    But at heart, it seems to be driven by a particular philosophical disappointment. AP had the hope that a mathematical kind of mapping relation would apply to language use. The meaning of our words would not be some free act of interpretation - an inductive connection - but instead a rigid act of designation, or deductive kind of connection.

    So ordinary language, which seems sloppy, allusive, often ungrammatical and even paradoxical, would be shown to have a tight mathematical-strength logical structure as the sturdy skeleton supporting its spongy flesh. It should thus be possible to speak with complete and unambiguous precision.

    But language does not represent meaning in this kind of simple, dyadic, mapping fashion. Talk about language games and ways of life then becomes an examination of how we actually use language. And that leads towards a more complex triadic relation which involves a “self” in a sign relation with a “world”. Words are used to achieve purposes. And so to the degree that purposes are purposes we have in common as communities, meanings seem easy to share. Yet also, we all have more personal purposes, and so there grows that which can’t be so easily and closely shared.

    All this can seem like a game, a social game, as it makes meaning a matter of subtle negotiations. We are always engaged in interactions where the personal and the communal are in tension. So every sentence spoken and offered up for creative interpretation could be pulled either way.

    There are no grounding rules as such. Or none that anyone must stick to in some ultimate way. But there are general constraints at work - a deep structure of developed communal habit, like a well worn forest path - that we can detect and respond to in terms of our own agendas. We can either decide to stick as closely as possible to an inductive sense of the “standard meaning” as we can, or instead play the other game of stretching the interpretation creatively in the direction of our own interest or advantage.

    All this free play in the works completely destroys the hope of finding some inner skeleton of rigid and truth preserving connections in speech acts or propositional-sounding language. But on the other hand, language still works marvellously. It has an organic flexibility that is quite unlike the brittleness and mechanicalness of a rigid mapping operation - a computational kind of relation. And the secrets of that organicism are a pretty huge and important philosophical subject to explore.

    Sadly, all those who are fixated on Wittgenstein seem caught up in the tragedy that was AP’s great failed dream. They only want to talk about what turned out not to work for them, and so the complete abandonment of any grand projects.

    Meanwhile largely unnoticed, there always was the organicism project taking shape in the background. CS Peirce in particular had laid out a triadic semiotic which gets at the true deep structure of what is going on.
  • Thoughts on Epistemology
    So far as hands and chairs and capitals of France go, it doesn't do too bad a job.Banno

    So to the extent that we neither have to question ourselves nor our worlds to any degree, commonsensicalism works fine as an epistemology?

    It does appear that your philosophical ambitions are mightily limited.

    Yes. That's rather the point.Banno

    Nope. The point was that there are constitutive principles as well as regulative ones. And that your game of always diverting the conversation away from the natural or necessary to the artificial or arbitrary is a transparent gambit.
  • Thoughts on Epistemology
    The argument is, roughly, that in a given language game (and it is all language games), there are certain things that cannot sensibly be doubted. So in geometry the three angles of a triangle add to a straight angle and in Chess the bishop moves only diagonally.

    However, language games themselves are subject to change. So in some geometries the angles of a triangle add to more than a straight angle, in others to less; once the pawn could only move one square, but to speed the game this was changed to two squares for its initial move.

    In such cases it is very important to understand which game is being played.
    Banno

    As a point of interest, note how the quietist likes to confuse regulative and constitutive principles.

    Arbitrary human-invented rule-based systems like chess are a favourite as they clearly have the least metaphysical-strength necessity. A bishop travels the diagonal just because we agree to say so to get a game going.

    And then - in sly conflation - the same is suggested of geometric truths.

    If we say the world is flat - make that arbitrary constraint - then the three angles of a triangle sum to a rotation of exactly pi. But if we relax that rule about the world being flat, then - hey presto - we have a new game called non-Euclidean geometry.
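
    Just to pin that down with the standard result (nothing original here): on a sphere of radius R, a triangle enclosing an area A has angles summing to

    $$ \alpha + \beta + \gamma = \pi + \frac{A}{R^2} $$

    which collapses back to exactly pi as the curvature goes to zero. The flat-world rule is just the limiting case of the relaxed one.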

    And I guess now we are not supposed to stop and ask what further games lie beyond curved metrics? We should continue to treat the situation as being as arbitrary as a game of chess?

    It is a transparent quietist gambit.

    Quickly - whenever the anti-metaphysicist senses that the removal of constraints is arriving at some fundamental limit - s/he will switch attention to the possibility of instead adding constraints in arbitrary-feeling fashion. As we get too near the bedrock of what might be constitutive, there is a bait and switch so we find ourselves safely talking about the merely regulative again.

    Hey, step away from that foundational vagueness! It can really mess with your mind. Step away from the abyss and think instead about all the arbitrary rules we can freely invent to create a metaphysics-free structure for our reality.

    We can understand our world in simple terms like the bishop that moves on the diagonal simply because that is a convention of a language game.
  • Thoughts on Epistemology
    One thing that is worth doubting is any theory that claims to provide an ultimate answer.Banno

    What about a theory that claims merely to provide a better answer?

    And indeed, a pragmatic theory of epistemology that says there are no ultimates as such, so a theory that would benefit from the self-endorsement of the very attitude it adopts?

    Perhaps that is one mistake. Another would be to suppose that there is an ultimate arbiter. The world is complex.Banno

    But you often speak as if you believe the world is the ultimate arbiter. Curious.

    Anyway, my pragmatic point is that the best epistemology has this self-limiting nature. Instead of taking the world as some brutely ultimate limit on inquiry, it sees inquiry as itself relative to a self-centred limit of interest. We enquire into the facts so far as they seem to matter to us. Beyond that, there is no particular point of view being served.

    It gives epistemology a self-grounding basis while also allowing it to grow or develop as large and complex as it likes. It is a closed and reliable system, yet also open and adaptive at the same time.

    And so far, I haven't seen your argument against it. That is probably because you are basically a pragmatist but hate to be identified as such. You have some strong bias towards a linguistic level of semiosis - an interest in "truth" as a language game - and so resist Peirce's more universal model of such games.

    As I've said, a problem there is that it tends to conflate ordinary language games with mathematico-logical ones. And that is a very big problem in the social history of AP. It has been a bad turn in philosophical thought.

    You want language games to stand for something epistemically generic. Yet you don't actually want to get forced too far from an ordinary language ontology. So your instinct for pragmatism lacks the semiotic machinery to cash itself out. It remains a vague gesture without the internal means to sustain a full theory as such. Therefore it has a considerable self-interest in decrying the very business of "having and arguing for an ultimate theory".

    The self-interest at the heart of philosophical quietism is pretty transparent. ;)

    There's something odd about thinking that one epistemological approach, perhaps one that looks good for science, will work in geometry; and organisational management; and ethics.Banno

    Odd? Or empirical evidence that it qualifies as being better?
  • A Way to Solve the Hard Problem of Consciousness
    The most radical implication of the new affective turn is that what has been considered unique to conscious subjects, the feeling of what it is like to be, the qualitative experience of the world, is implied in all of what we call physical processes, not as one thing added on, but intrinsic to them. This is because in creating the abstractions that are so useful in the physical sciences, we don't recognize that qualitative transformation is intrinsic to, implied by all existents.Joshs

    ... swinging the pendulum in the other direction to simply say that all physical processes have an experience-of-what-it-is-like is to make the same mistake from the opposite side of things...StreetlightX

    The "affective turn" is hardly revolutionary in the history of psychology and neurology. But yes, the mainstream ontology of our culture is mechanistic, and so an organic conception of things continues to struggle to break through.

    At the heart of the embodied/affective approach is the recognition that selfhood is itself a functional construct. It is a necessary part of the business of constructing "the world". We have to see the world from a point of view. So that is why there is "something it is like to be". The brain is not merely modelling the world, it is modelling a self in its world. It is modelling a self for which a world exists - and exists in contrast to its desires, expectations, and possibilities.

    So it is an "easy problem", and not a hard one, to see why consciousness is imbued with a subjective sense of self. That is a functional information processing necessity. The modelling must include the weaving of the "persona" for whom the world is a point of view. Talk of affect is simply talk about a functional sense of self that is buried deep in the neurobiological design of the brain's evolved archictecture.

    As SX correctly says, there is no warrant from there to turn around and treat qualia as material properties - the panpsychic tendency.

    Instead, what it should rightfully do is call into question our belief that we know either the world, or this "self", in any direct and unmediated fashion.

    We have - for good functional reasons - constructed a model of reality that speaks dualistically to a concrete and objective physical world and an immaterial and subjective world-experiencing mind. And recognising that is the basic trick going on, it should call into question our deeply held convictions about both.

    So rather than simply conflating the two - arguing that mind must pervade all matter ... as if we actually know truly what either of those two things are - we need to step back to a further level of ontology that deals with the modelling relation itself.

    Instead of pan-psychism, that would be pan-semiosis. If we want to extend the neurocognitive revolution to physics in general, it would be the very thing of the subjective vs objective dichotomy that we would want to deconstruct in terms of sign relations.

    And physics has been doing just that anyway with its own information theoretic turn. The Cosmos is self-organising not because it is perfused by vague feeling but because it is structured by a dissipative purpose coupled to informational limits.
  • Thoughts on Epistemology
    Belief and doubt are complementary. You can't have one without the other as each is the ground of the other, and it is only together that you have any epistemology at all.

    The majority of the posts appear aimed at arguing for either one or the other as somehow foundational, indubitable, primary, or whatever. The usual monistic response when faced with a dialectical choice.

    But they are simply the opposing limits of the process of inquiry. And what matters is the way that they are balanced against each other.

    A systems science or organicist understanding of balance says a dichotomy is a symmetry breaking, and a metaphysically complete symmetry-breaking is an asymmetry. An asymmetry, in turn, is a hierarchical organisation - a breaking that is local~global in its organisation.

    So doubt and belief must be balanced in this fashion. Belief is the global or backdrop scale of epistemology - the broad and general ground of things not in doubt. While doubt is the local and particular scale - the various individual things which could be considered as failing to fit this background in some significant fashion.

    A well-organised mind would have this well-developed hierarchical arrangement. There would be a robust backdrop of habitual or ingrained belief. And against that, doubts would arise in highly focused and meaningful fashion. Doubt could not be a general activity. But it would be a useful localised activity.

    And again, belief and doubt would be just ideal limits, never absolute. A well-organised mind would simply approach those ideal limits by the end of its process of inquiry.

    And also, a further important pragmatic principle is that this "truth seeking" behaviour has to have a purpose if it is indeed going to be optimised by a complementary principle of unconcern.

    Again, a duality or dichotomy.

    To have knowledge that is meaningful - that speaks to some purpose - means that the knower also has to be able to discard noise. The mind has to be able to filter out all the possible facts, doubts, uncertainties or unknowns that are the differences which make no difference ... to "it".

    So meaningful knowledge is self-centred. The autonomous self arises - in contrast to the world in which it exists - to the degree it can effectively ignore that world in pursuit of its wishes.

    It is fundamental to a pragmatic epistemology - the one that recognises selfhood to be a further epistemic constraint on knowledge - that this self gets to determine where to draw its own boundary of indifference. It is not a bug but the feature that this self can be indifferent to localised doubting - whether that is seemingly justified or unjustified.

    The mistake is to think that the world is the ultimate arbiter. Out there, the actual truth of the matter lies.

    That may be so, but first there has to be a genuine reason to care. Doubt only comes into play if a difference would make a difference. To some purpose. And hence the "self" that such intentionality would represent.
  • Self-Identity
    For example, if one claims to be lazy and stupid, but is actually hard-working and highly intelligent by that culture's general standards, then does one lie to oneself or is it an internal conflict with what actually constructs the definition of lazy and stupid?Lone Wolf

    It helps to see that this linguistic self is a social construction. And so the words society creates as descriptors are those that it would have us apply in our self regulatory behaviour on its behalf. These are the social judgements by which we are meant to judge ourselves, and in so doing, create that very self.

    Society has some theory about what individuals ideally ought to be. And we learn the habit of constantly measuring all our behaviour against that. You are describing that self judgement here.

    More than that, you are highlighting the pretty harsh and open ended standards that are characteristic of a modern developed nation state of mind. It is never good enough for a self actualising individual to be merely average. One must be transcendent. :)
  • Science is just a re-branding of logic
    which, being a pragmatist, doesn't bother him.andrewk

    But Hume represents the nominalist turn of thought. He was not a pragmatist in the sense of arguing for the reality of the general or universal. He was an atomist in regards to empirical sense data. So his epistemology reflects a particular brand of metaphysics.
  • Is Logic "Fundamental" to Reality?
    ...logic apparently was "inescapable" because reality is logical... Usually this is described as logic being part of the "fabric" of reality (whatever that means).MindForged

    Isn't this confusing logic and causality, strictly speaking? Of course, the two are related.

    We think of reality as being fundamentally reasonable or intelligible because there are certain emergent structural truths that appear to have the force of rational necessity.

    This is how we reacted to the early discoveries of maths. Behind the accidents of the material world there was another world of inevitable formal necessities. Mathematical forms you could not escape as an ideal limit on being.

    Eventually this did lead to mathematical logic - the "geometry" of computational, permutational or deductive form. And those syntactic shapes appear to be reflected in the material operations of the actual world. They seem to encode something about natural causality.

    So there is a relation between logic and causality. But it remains a weakly expressed one. More work would need to be done to show if logic in fact describes natural necessity.

    This is a live debate. Some folk simply presume Turing Universal Computation proves the physical world to be computable. One kind of mathematical model speaks to the true causal structure of existence.

    But anyway, my point is that it is the causal structure of the material world that is the target here. And the mathematics of logic seem our best models of that. So it is easy to make the step of claiming reality is actually a product of logical necessity.

    There certainly seems something in that line of thought. But also a lot of potential pitfalls to address.

    But then I suppose this gets us back to the issue with there being all sorts of different algebraic logics (Boolean algebra, Heyting algebra, etc.), and we even know that some Non-Classical Logics can be constructed purely within their own meta-theory (e.g. Paraconsistent semantics).MindForged

    Yeah. And all these also presume some shared metaphysics. They presume an atomism about reality. So they really only can address material and efficient cause. They struggle to address formal and final cause.

    So if you believe Aristotle - reality is a system involving all four causes - then you can see why mainstream logics, in being atomistic rather than holistic, might struggle to give a full account of the causal structure of reality. You can see the major problem that arises.

    I'd mention ontic structural realism here. It leverages the maths of permutation symmetry and symmetry-breaking. Fundamental physics has shown how that is the maths that best describes the logic/causality of the Big Bang universe.

    So there is a connection to be made for sure. Our theories of mathematical necessity would seem to model the fundamental structure of existence in a way that makes its causal organisation seem completely reasonable or intelligible. We are getting there - with traditional logics perhaps having far less to do with the holistic picture than folk were expecting.
  • Science is just a re-branding of logic
    Or, taking Peirce's alternative, if we adopt the premise that "nature takes habits", we can deduce that it is most likely that the Sun will rise tomorrow, unless some greater unforeseen habit of nature intervenes.Janus

    This is important as Peirce is giving an actual reason why induction is something that strengthens with time. A constraints-based view of the world says induction should become ever more reliable, because a reasonable habit will keep growing stronger in being reinforced by its own success.

    That is how science works. Our conviction strengthens as a belief survives challenge to its applicability.

    And that is how the solar system works. In its early days, the sun came up every day on the earth in a far less reliable manner. It took a while for a ball of debris to even accumulate into a planet. The early solar system was fraught with broken-up junk that could have smashed into and derailed the earth, ending any nascent habit of a daily dawning of the sun.

    But over time, the solar system was cleared of all the junk, all the chaos, and settled down into a long-term groove. The inductive grounds of a belief became ever firmer.

    So while there is nothing absolute to warrant that the past predicts the future, a constraints-based view of causality makes it deductively reasonable that regularity develops over time. Habits want to emerge. Order predicts not just order, but increasing order. Constraints develop a weight that makes it increasingly hard for individual accidents to derail them.

    So the principle of induction is - as Peirce put it - about the taking of habits.
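
    A minimal sketch of that habit-taking story (my own illustration - a Polya-urn style reinforcement process, not Peirce's maths): every outcome that occurs gets reinforced, so whichever tendency takes an early lead entrenches itself, and the system settles into a stable groove that individual accidents find ever harder to shift.

        # Minimal sketch (illustrative only): a Polya-urn reinforcement process.
        # Each time an outcome occurs its count is reinforced, so early chance
        # fluctuations harden into a stable, ever more predictable habit.
        import random

        def reinforcement_run(steps, seed=1):
            random.seed(seed)
            counts = {"habit": 1, "accident": 1}  # start with no established bias
            freqs = []
            for _ in range(steps):
                total = counts["habit"] + counts["accident"]
                pick = "habit" if random.random() < counts["habit"] / total else "accident"
                counts[pick] += 1  # success reinforces itself
                freqs.append(counts["habit"] / (total + 1))
            return freqs

        freqs = reinforcement_run(10000)
        print(f"habit frequency after 100 steps:    {freqs[99]:.3f}")
        print(f"habit frequency after 10,000 steps: {freqs[-1]:.3f}")

    Early on the frequency swings about; after enough reinforcement it barely moves - which is the sense in which the inductive grounds of a belief become ever firmer.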

    The Humean view arises from imagining a reality without real interactions. It is a Newtonian paradigm where everything reduces to local accidents - random collisions. Of course, in a world imagined like that, you would expect there to be no gathering history, no developing state of generalised coherence. If everything is imagined as fundamentally random and memory-less, then of course the deductive consequence - Hume's argument - is that even the laws of nature might change for no reason at any time.

    But once you have a metaphysics which can take account of interactions - see how that requires a generalised coherence to emerge just due to "randomness" - then you will deduce something quite different about nature. You will have a different model-theoretic view to test by observation.

    So really Hume is advancing a metaphysics-based hypothesis - and one that is believed due to Newtonian science. That is what gives it any credence it might have.

    However, the "shock" is that this Newtonian causality just isn't what we observe in nature - on the whole. Instead we see a world where interactions result in a generalised state of coherence. Constraints or habits inevitably - and logically! (we can do the maths of self-organisation!) - must emerge to bring predictable and increasing order to their "worlds".

    Hume had the right argument for the wrong metaphysics. And physics has since moved on as well.

    Gosh it's like someone here has never read Hume before.StreetlightX

    Ah. Hear the plaintive squeak of someone who has never stopped to truly consider what Humean doubt is about. Such a big difference between reading about something and thinking about something.
  • Thoughts on Epistemology
    You cannot genuinely (coherently and consistently) doubt that there are any 'thises' because to do so would undermine the coherence of all and any discourse.Janus

    Yep. It is on the whole that it rings true: we believe in the world as a generality.
  • Thoughts on Epistemology
    First of all, I was talking about the relationship between certainty, certitude, doubt, and mistake. I don't see how "constraints" is relevantMetaphysician Undercover

    Belief is a constraint on doubt. Doubts are always possible to manufacture on some grounds. So belief simply aims to constrain doubt to a reasonable degree.

    You are taking some absolutist position. The only position that works is a relativist one.

    Secondly, to say that a free choice decision by a human being is limited to a difference which doesn't make a difference, is clearly wrong, because then we wouldn't have to think about any of our decisions, because they wouldn't make a significant difference.Metaphysician Undercover

    I was talking about the freedoms of the world, not human freewill.

    So a cat may have a chewed ear and yet still function as a cat. The chewed ear is a difference that doesn't make a difference.

    I then enter that cat in the cat show. Now the chewed ear is a difference that makes a difference.

    So as you have previously argued, everything in the world is individual. Even two identical things are located at individual points of spacetime. But categorical beliefs are about generalities - what things have in common that makes them "the same". And so such beliefs also have to know how much actual difference can be ignored as insignificant or unsurprising.

    When is a mistake a mistake? When it is a significant difference to what was predicted.
  • Thoughts on Epistemology
    What you say doesn't make sense. You are claiming that the possibility of mistake is not grounds for questioning a belief.Metaphysician Undercover

    You misunderstand the nature of constraints. The free actions of the world are only limited to some threshold variety of differences that don’t make a difference. That is the probabilistic view built into science. No two events are the same. The question is whether they are similar enough - whether the variety is essentially random rather than significant, that is, due to some further undiagnosed cause.

    So our beliefs are generalities that predict an acceptable range of outcomes. A mistake would be when instead we find evidence of some further causal mechanism that says something more than normal levels of chance are at play, and that we need a generalisation that makes better predictions.
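
    To put that in a crude statistical sketch (my own illustration of the general point, nothing more): a generalisation predicts a rate, and an observed run only counts as a mistake when it sits further from that prediction than ordinary chance variation would explain.

        # Crude sketch: flag a run of observations as a "difference that makes a
        # difference" only when it deviates from the predicted rate by more than
        # normal chance variation (here, a simple standard-deviation threshold).
        import math

        def surprising(successes, trials, predicted_rate, threshold=3.0):
            expected = trials * predicted_rate
            spread = math.sqrt(trials * predicted_rate * (1 - predicted_rate))
            return abs(successes - expected) > threshold * spread

        # 52 heads in 100 tosses of a believed-fair coin: insignificant variety.
        # 75 heads: evidence of some further undiagnosed cause.
        print(surprising(52, 100, 0.5))  # False
        print(surprising(75, 100, 0.5))  # True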
  • Thoughts on Epistemology
    You’re still talking engineering.Wayfarer

    Always better than mystical bollocks.
  • Science is just a re-branding of logic
    So your claim is now about past vs future tense and not past vs present tense. Do you blame me for feeling confused? Especially when you just won’t correct what you said.

    But again, if we are now in the future that acceptance of induction predicted, then that is inductive confirmation of the principle. We decided to use the principle and now we can see how well it has worked. So it would be matchingly unreasonable to now drop the principle. Its converse lacks any empirical support and only has empirical falsification.

    So sure, induction says we can’t know that the past predicts the future. But when we speak to the principle itself, it has got a track record that makes that a reasonable bet.

    We act according to our nature, which is to assume the principle of induction, without wasting time futilely seeking a warrant for the assumption.andrewk

    It’s hardly futile if we have a history of evidence. Again, the issue is not whether things are certain but whether there are good reasons to continue to hold a principle. And the evidence weighs heavily here for the principle and not its contrary, or even a null hypothesis.
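
    One standard way of putting a number on that "reasonable bet" - offered here purely as an illustration, not as anything anyone above committed to - is Laplace's rule of succession: after n unbroken successes, assign the next trial a probability of (n + 1)/(n + 2), which climbs towards certainty without ever reaching it.

        # Illustrative only: Laplace's rule of succession as one way to quantify
        # how a growing track record strengthens an inductive bet.

        def rule_of_succession(successes, failures=0):
            """Probability of success on the next trial, given the record so far."""
            return (successes + 1) / (successes + failures + 2)

        for n in (1, 10, 1000, 1_000_000):
            print(f"after {n} successes in a row: {rule_of_succession(n):.6f}")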
  • Science is just a re-branding of logic
    You said the difference in tense between past and present was crucial - between worked and works.

    So what was that about?