Comments

  • Object Recognition
    What my charge would amount to is an invitation to see how the intrinsic CONTENT of scientific theory changes, including how the RESEARCH is conducted and interpreted, as a direct result of a shift in metaphysics or philosophy of science.Joshs

    Agreed! Though if I said this I would probably have only said "theoretical framework" instead of metaphysics.

    All accepted science works, but changes in the metaphysics of science leads to changes in our understanding of HOW it works, and as a consequence leads to fresh concepts.Joshs

    Also agreed, though again I wouldn't have reached for metaphysics.

    So let's talk about that. I think we are both committed to a view of science evolving and changing, and that's roughly why I think of pragmatism as most clearly expressing the spirit of science. There's some confusion possible because there's presumably a hierarchy here, with the predictions of research down near the bottom, very changeable, theoretical frameworks above that, somewhat less changeable, and maybe way up at the top something like metaphysics.

    Some of the constraints from above on what happens below are clear enough, as in the way theory guides experiment design. But those aren't absolute, because theory doesn't get to determine the experimental result; that's what the lab and the field are for.

    But to some degree what's above theory does determine the result, in the sense that it guides interpretation, and that in the two ways I think you were referring to: something like metaphysics which guides the interpretation of any theory-and-research program, what it all amounts to, what sort of thing you learn when you learn something by doing science; and something like philosophy of science, which guides decisions about whether and how experimental results count as evidence for theories.

    It's clear enough how the latter can constrain practice, or not, but I'm not as clear on how the 'metaphysical' arm does. If it doesn't cash out as a change in methodology, or in how theories are judged, then it seems like it might be possible to swap out the metaphysics without too much change in practice. BUT, as you point out, how research already done is understood, and how research to be done is undertaken might change considerably.

    The important thing to me is that change at the various levels here is always a live option, and I think this is the pattern that pragmatism spots, so there's no reason to be wedded even to your top-level constraints of the moment.

    What threw me about the way you were putting this earlier was that it sounded to me like the important thing to you was picking the right metaphysics, the one that jibes with your philosophical views, which is why I referenced Lysenkoism. I don't see it that way, obviously.

    I doubt my little sketch here is perfectly satisfactory to you, but I still think there's broad agreement.

    It shouldn’t be ashamed since they are intertwined aspects of the same company. My point is the ways philosophy and science are different is much less significant than you think they are, such that it is silly to even try to distinguish the domain of philosophy proper from science proper, other than as a matter of the conventionality and generality of the vocabulary.Joshs

    Ah, okay, that's a funny thing, because I have been exactly questioning that sort of boundary policing. I think they should be taken as continuous. So here again you and I are on the same page, or same enough we can talk.

    I've been trying to undermine @Antony Nickles's claim that science should get off philosophy's lawn. My talking up the virtue of science was not to cordon off philosophy as its unworthy cousin, but to convince philosophy to accept science as kin.
  • Object Recognition
    Oh, there’s correspondence there all right.Joshs

    I mean, I get that "correspondence" is like a swear word for you, but you're just making stuff up.

    It may be in the form of indirect modeling, but there is something in your notion of scientific observation and measurement that keeps science apart from the humanities and other areas of cultural creativity, and I think it has to do with how science gets a grip on the real.Joshs

    What notions? Where did I talk about observation and measurement? You're just projecting, which -- it's just weird. Do you need me for this conversation?

    Here's the thing. I could explain what my actual conception of science is (communal, pragmatic, predictive, sensitive to feedback, self-updating, blah blah blah), thus defending myself against your charge of philosophical sin. But I don't have to.

    What does your "correspondence" charge amount to? Suppose it's true and "correspondence" is inscribed in the Great Book of How to Do Science and What It Really Means. Then you could object that correspondence to the real is -- what? Is refuted? Is bad? Is a discredited metaphysics? Is problematic? Should science care? If it works, it works. You can stand outside all day shouting, "This whole enterprise is a farce! They believe in correspondence to the real, those scientists!" No scientist will care. No one else will either. You will maintain your philosophical purity, as you understand it, but so what? Science will go on doing what it does.

    Which is of course the point. Science is successful. Art is also successful. Literature is successful. History is successful. All of them in different ways, and it's no knock on art or literature or history that they are not science. Science is also only what it is. Is philosophy successful? I think most people feel it's a little harder to say whether it is -- but it's easy if you count spawning the natural sciences as part of the history of philosophy, because philosophy ought to be proud of that.

    I still don't see any good reason for philosophy to be ashamed to be seen in the company of science.
  • Object Recognition
    It sounds like science to you is tied to a notion of correspondence between scientific observer and observed realityJoshs

    Nope. You got all of that just out of me using the word "research"? Geez.

    You then have pragmatism arising out of the new scientific spirit of inquiry where the mind is all about modelling, habits and judgements - constrained by the fact of being in the world rather than being remarkable for standing apart from that world.apokrisis

    Exactly. I can't speak to Peirce, but it takes no more than a page of Dewey or James to see this. I mean, James literally wrote the book on psychology. His career is physiology > psychology > philosophy. I don't know how much more obvious this can be.

    And a quick reminder that the full title of Hume's book is A Treatise of Human Nature: Being an Attempt to Introduce the Experimental Method of Reasoning into Moral Subjects.

    The pragmatists I'm reading may differ from Hume in where they land on particular issues, but it's the same spirit, and I see no reason for work undertaken in the spirit of science to be discontinuous from other work undertaken in that spirit.
  • Object Recognition


    We know what picking the results of research ahead of time looks like, and it's not the same thing as working within an existing paradigm.

    Inspiration can in theory come from anywhere. In practice, it often comes from a handful of prophetic thinkers who had to wait decades before the larger culture was ready to embrace their ideas.Joshs

    I wouldn't even describe Darwin this way, so we're just not talking about the same world. Lots of people have interesting ideas, it's the research that matters. It's the research supporting and extending Darwin's insights that makes his ideas matter. Picking ideas you like -- well, we all do that, but that's not science.
  • Why should we talk about the history of ideas?


    Thank goodness! Yes that makes perfect sense, and I see now you were filling in a possible motivation.

    I was deeply confused. Apologies if I misread you.
  • Object Recognition
    Such a search will reveal philosophically reformulated notions of brain, body , language and culture that are much more compatible with WittgensteinJoshs

    I'm right now reading a book of psychology that I would argue is in some clear ways compatible with the later Wittgenstein.

    On the other hand, who cares? Wittgenstein is interesting, but aligning your theory with Wittgenstein or with any philosopher really should not be a goal of any scientific research program.

    Inspiration taken from Wittgenstein? Absolutely. But inspiration can come from absolutely anywhere and ought not guide you toward a particular result.
  • Why should we talk about the history of ideas?
    seems like it stems from the common tendency to conflate "truth" and "objectivity."Count Timothy von Icarus

    I have no idea how you got that out of what I wrote or what you quoted. I was making very close to the opposite point, that you need to commit to an interpretative presentation for the history lesson to hook into a larger argument. Reciting only facts leaves out how those facts contribute to the argument and why what they contribute matters (unless that's clarified elsewhere, obviously). It turns reasons into non-sequiturs.

    Here, for free I'll give you another reason you might not express explicitly what makes particular facts relevant to the case you're making: they're not. This can play out a couple ways but the result is the same: the connection between the facts recited and the point you're making doesn't show up because there isn't one.
  • Masculinity
    Btw, a little cluster of articles about masculinity in American politics recently dropped over at Politico.
  • Masculinity
    my guess is that masculinity probably isn't related to where we landedMoliere

    Maybe you're right, and maybe bullying is just one style of manipulation among others. The ones that matter here lean heavily on devaluing the target. It looks a bit like shaming, but the Twitter threads I've looked at did not seem to be shaming as a means to get someone to change, but as a means to get them to shut up, to take away their power and their agency. It's not you ought to reconsider your life; it's you need to understand that you're a piece of shit who doesn't deserve to speak. It's the sort of thing abusive husbands say. Manipulation on kickass 'roids. All of which is why it struck me as bullying, and as the kind of manipulation we associate, for very good reason, with less than admirable men.
  • Object Recognition
    And when I distinguish an object to someone, and they ask how I am distinguishing that object (in what way am I distinguishing it, by what features or attributes), no one ever explains a process of the brain.Antony Nickles

    This happen to you a lot, someone asks you, "By what features or attributes are you distinguishing that object?"?

    For one thing, "feature" in everyday life is (a) a salesman's word, (b) refers to a guest vocal by a hip-hop artist, or (c), rarely nowadays with the disappearance of shorts and doubles, a movie. But that's not how you're using it; you're using it as a bit of philosophical shop-talk. "Attribute"? "Object"? Hardly used by ordinary people at all. I've been asked whether I think a pair of pants is green or grey, but I wasn't asked what color I would attribute to them, or by what features I distinguished the pants as an object. If you have these sorts of conversations on the regular, your life is very different from mine.

    The question that @NotAristotle asked was clearly a question looking for a scientific explanation that would involve processes of the brain. He could not have been clearer. The answers he received referred to gestalt principles (@Pantagruel), neural networks (@wonderer1), the predictive and inferential nature of perception (@apokrisis), the involuntariness of object perception (@L'éléphant), pattern recognition (@DingoJones), survival value so that natural selection can kick in (@litewave), and how object perception arrives very early in our lives (me). Those are all important pieces of a very complicated answer.

    Only you told him, don't do that, this is a philosophical issue, and therefore a question about our ordinary criteria for objects; turning to science is just your desire for certainty, your fear of uncertainty, and an evasion of responsibility. And then @Joshs showed up to throw Husserl at me, god love him (and oddly choosing not to mention that Merleau-Ponty references early gestalt psychology).

    I would've given odds that you would say that, but no one who's read any of your posts would have bet against it. The odds were also pretty high that @wonderer1 was going to say something about neural networks, that I was going to say something about learning and "go read some more", that @Joshs was going to question the foundations of science, and that @apokrisis was going to say something a lot like what he did, which most of us can recognize but not reproduce (although I did know that if he answered this one, the word "crisp" would be there), showing that there is a hierarchy of issues involved and how those fit together.

    We're all pretty predictable. It might all seem very interesting to @NotAristotle, but it'll seem less interesting the fourth time he asks a question and gets exactly these answers for the fourth time.

    Most of the sciency posts here were more or less explicitly partial answers, with the exception of apo's, because apo doesn't do partial.

    Only you and then @Joshs argued that science is more or less irrelevant here, @Joshs because science itself has foundations that are, I guess, metaphysical, you, I think, because nothing could be a 'higher court of appeal' than our practices. @Joshs accepts some sort of continuity between philosophy and psychology, but on the grounds (I guess) that psychology is more philosophical than it lets on. You do not. I'm not clear whether you have a critique of science in mind, or only of the reliance on science when doing philosophy or perhaps also in everyday conversation.

    I would defend the use of science in philosophy this way: we begin in the middle, with conceptions we only know through our everyday reliance on them, scientists being just like other people; we investigate ourselves and our environment relying on those conceptions but without assuming they are the last word, that we already know everything that can be known. It will frequently turn out to be the case that our everyday conceptions are inadequate for understanding what we find, even misleading, but we can also come to understand why we have come to conceive of things as we ordinarily do. Why, for instance, we perceive a world of objects, to use a philosopher's word. What science helps you resist is the elevation of your everyday understanding into a theory, which is the philosopher's game. Scientists, not philosophers, "leave everything as it is": of course you perceive tables as solid, here's why, and here's why "solid" can't mean what you might reflectively think it means, but your sense that there's a difference between room-temperature wood and room-temperature water is right, and here's more why.

    Philosophers used to talk about stuff like this but they don't anymore for the simple reason that science does it better. They know it. Ordinary people know it. @NotAristotle knows enough to know it's science that will answer his question, not philosophy.

    Having been painted into an increasingly small corner by science, some philosophers have gotten their own paint to mark the line that science Shall Not Cross. Only Philosophers Allowed. You can keep all the territory you've conquered (an amusing gesture, considering there's no way philosophy is getting any of that back), but no more! What's left in this corner? Metaphysics? Probably not for long. Transcendental arguments? They fit the bill, apparently, but the need for the transcendental move is premised on a deeply flawed psychology, folk psychology elevated to theory. Language? The same transcendental issue. No, it's become pretty clear that philosophy is (at best) a methodology in search of a domain. (@plaque flag) Unless you're apo, and your domain is everything.

    I've been on the other side of this argument as @Isaac could attest. I've tried defending the specialness of philosophy. I think there's still some room for stuff that science isn't quite suited to or that it doesn't bother with, but I'm through chasing science off my lawn. I think it's a betrayal of the spirit of philosophy and resentment of the success of science.

    What's your take? I'm asking.
  • Masculinity
    we're dealing with the personal psychology of J.K. Rowling and whether or not that is a good psychology or if she is a bad person, and this is why she's good/bad, and if you do/not like her then you're also good/bad. There is no demand hooked to the decision which can be debated. It's her, and reflectively our own, moral character that's at stake.Moliere

    Certainly there are people who see this as the job, sorting people into good and bad.

    Some people feel it's important -- and their job -- to sort art into good and bad. To sort artists into good and bad.

    It's related to something I have found very peculiar about the reading habits of my millennial and gen-z friends and coworkers, that the first thing it occurs to them to say is, I liked this character and I disliked this other character. I didn't grow up reading novels that way, and I don't understand how this happened or why people do it.
  • Object Recognition
    When I say certainty I only mean predictable, repeatable, knowable, etc., which are the criteria for the conclusions of the scientific method. EDIT: most importantly here is that it does not matter which (competent) person does the experiment to reach the same conclusion.Antony Nickles

    But those are all good things, and the best way to fight off dogmatic certainty. Over here in the science-friendly world we say "Better a question that can't be answered than an answer that can't be questioned." (Feynman, though maybe not originally, I dunno.)

    Science can never be a dogma-free zone. It can be a practice that is self-aware with regard to its reliance on guiding presuppositionsJoshs

    I think that's abusing the word "dogma". Of course there are working assumptions, all of those things you mentioned, but in the long run any of them can change. In the right circumstances, even what constitutes an experiment can be up for grabs. Science is how we fight entrenched certainties and dogmas.

    And the exact point is that if we are talking about brain processes, we’re not talking about mistakes and excuses and responsibility because of the desire to make our interplay with objects pure instead of muddled with those considerations and our relation to others.Antony Nickles

    There is plenty of room for all that, while still understanding where we're starting from, where we are all starting from insofar as all of our brains handle distinguishing objects, some basic physics, color constancy of objects and loads of other things in roughly the same way so long as there's no damage or inhibited development. Rather than denying our responsibility for what we do with these capabilities, it provides the ground we stand on when we have those discussions.

    focusing on our biological relationship to objects is fine (it’s not wrong), but only, it’s trying to answer a question that philosophy has misconstrued out of fear and desireAntony Nickles

    There may be something to that, but our very unconscious construction of objects is also and unavoidably driven by fear and desire, fear of what might harm us, desire for what sustains us. Just the nature of being an organism. My version, instead of telling people they need (philosophical) therapy, just shows them why they were so puzzled, shows the fly the way out of the fly-bottle. Hume would have been deeply gratified, as he would have had he lived to read On the Origin of Species, some of which he also intuited. (Darwin had the benefit of reading Hume, and did.)
  • Masculinity
    I'm still looking for the loop backMoliere

    There's some stuff here. One is that the behavior we all deplore -- because Street's gone, there's no one to take the other side -- among certain groups of young progressives has a name: bullying. So there are some cultural paradoxes about masculinity threaded through all this.

    Harry Potter, and the fate of its ("its" because I don't mean him but the world and the stories) creator, is the big case in point. On the one hand, Harry Potter was deeply meaningful to a great number of young people because part of the message was, it's okay to be a weirdo, it's okay to be different, it might even -- ugly-duckling style -- be wonderful to be different. Queer kids everywhere got the message.

    But with time it's been possible to take stock of Harry Potter, and more people have done so. (My son has pointed me to some interesting video essays re-appraising HP.) Even early-ish on, Ursula K. Le Guin was asked about it and commented that it seemed to her pretty derivative and "somewhat mean-spirited." This is the point a number of people have been converging on: it is mean-spirited, and in fact the whole thing looks a bit of a revenge fantasy. The queer kids, the theater kids, the weirdos, they get bullied a lot (as Harry does by the Dursleys) but secretly they're the ones with the real power, power you can't imagine, and when it's their turn, you're gonna pay.

    When Jo Rowling shot her mouth off, a generation of readers put into practice what she'd taught them: make her pay.

    If Street were here, he'd argue that this isn't violence, it's counter-violence. Fuck Jo Rowling. She's got it coming. But to the rest of us, this looks like the same old tragic tale of the revolution adopting the means of the oppressors they overthrew.

    And in this case, the means are unmistakably associated with toxic masculinity: it's bullying. The desire for revenge is understandable but besides being wrong, it's a mistake. It re-entrenches the technology of power you were trying to undo. What's wrong with toxic masculinity is not that it's abuse of women by men, but that it's abuse at all, and that ought to be obvious because asshole men are more than happy to abuse other men. (It's why the conclusion to the British Office -- as we call it over here -- is so satisfying, when Brent stands up to his bully. That's not a blow for cis-white-men, it's a blow against bullies.)

    What comes out of the superhero stuff is issues about what the favored term is. Because of the holocaust, there was a lot of talk for a while about how dangerous it is to dehumanize the other, and that's true so far as it goes. What struck me -- and why that story stuck with me all these years -- is that the how-to-make-a-torturer story takes the torturer as the favored term, not the victim. Everyone not the torturer is a potential victim.

    For whatever reason, the current state of play takes victim as the favored term, and everyone not a victim is (potentially?) equivalent to the torturer. And the trouble with that turns out to be that you miss the opportunity to mark characteristic behavior of the villain as what makes him the villain. You the victim have no problem becoming the bully in turn because the problem with the world was only that you were being bullied, not that there were such things as bullies at all.
  • Masculinity
    But your notion of superheroes would make sense of righteous fury.Moliere

    I don't know if this holds up, and these days I would associate it with the Stanford prison experiment (which I didn't know about when I heard this) for good or ill, but I remember clearly an explanation of how you create torturers -- or, more broadly, that sort of personal guard or secret police, the trusted elite troops of the dictator.

    The main idea is that it's not about dehumanizing the enemy. What you actually need to do is convince these young men that they are special, that they are the real defenders of the nation, uniquely qualified to do this important work. You build them up so that they feel they are above everyone except their master. That's why it is as easy for them to torture collaborators, their own countrymen, even their own friends and family, as it is to torture the foreign spy or soldier, the rebel, the ethnic other. None of those distinctions matter to them. Whatever your status or position is, theirs is higher, in their eyes. You convince them that they are in effect superhuman, and that's why they are beyond good and evil, and needn't concern themselves with questions like whether what they're doing is moral. That's what lesser beings worry about.

    Now, I don't spend any time on Twitter or the like. But I hear things, and read thoughtful highbrow articles about what goes on over there. Any sense that some of these folks have taken such a view of themselves? The right still seems overwhelmingly driven by grievance and resentment. That's not my impression of the left. They do seem a bit more like avenging angels, meting out justice with an unforgiving certainty people used to be much more hesitant to claim for themselves -- unless someone went to the trouble of teaching them.
  • Why should we talk about the history of ideas?


    A surprising answer! Good for you.

    Nietzsche has that line, "Most philosophers are bad writers because they show you not only their thought, but the thinking of their thought."

    He might have been bullshitting though.

    I'll say this though: the style you're describing can be perfectly appropriate as a pedagogical tool, and it's one reason what sprang to (your) mind was wisdom traditions, where someone is definitely the master or teacher and someone else is the student.

    That is not appropriate here, where we are all collaborators, and that's why you won't find this kind of thing in science either. What we do is a cooperative venture. There are no masters here to gnomically bring us to enlightenment.
  • Object Recognition
    it does appear that objects are by and large constructed within the brain, without our awareness, — Srap Tasmaner

    What we want with this picture is to understand seeing and identification of objects without our participation in the process. The chance of error previously led philosophy to create the idea of “appearances” (compared to something more “real” or certain). The current fascination with brain processes comes from the same desire.
    Antony Nickles

    I think you're on the wrong track here.

    (1) Science is not the land of certainty. People talk this way sometimes, sure, even scientists, but when it comes down to it, science is a dogma-free zone. So if you're looking for certainty, it's religion you want, not science.

    (2) No description of what the brain does concludes, "And this is how your brain allows you to know for certain that ..." It's similar to (1). Those perceptual processes we're unaware of, they do not provide some faithful reproduction of our environment, but useful working predictions. There are well-known ways -- various optical illusions, in particular -- in which if you think that's what you're getting, what you actually get will be awfully confusing.

    (3) There have been persistent puzzles in Western philosophy that I believe largely stem from not having the concept of unconscious mental processes. Hume seemed to intuit some of this, in seeing that reason alone cannot account for our understanding of objects, causality, and so on, and yet finding that he experiences a world of objects and causal events. Objects, for instance, are given for us, because we do not in fact have conscious access to the "raw data" our senses take in -- by the time there's something we can be aware of, it's already been constructed as an object.

    Our usual “unawareness” of these acts are because we are so trained in them we handle everything effortlesslyAntony Nickles

    Depends a little on what you mean by "trained", but no, this is just not what the research in developmental psychology looks like. The physics of objects begins showing up at less than six months old, practically as early as we can devise tests for it. If by training you had in mind some kind of social convention, that's just not it.

    And it looks like we are not aware of how some of the basic building blocks of the world are put together for us because we cannot be. The connections aren't there. It may present a bit like a habitual activity that you can perform "on automatic", without thinking, but there are things that you were never thinking, not consciously.

    For example: I point out an object you had no awareness of and you “construct” it into your world in learning to identify and differentiate it, learn where to find it, etc. In an actual sense, your unawareness of it as a separate distinct object means it does not exist (for you), as you have no reasons for it to matter, no criteria of our reasons to be interested in it. Basically, the brain’s activity during all this is not critical to, nor does it illuminate, the philosophical issues involved.Antony Nickles

    I don't see what philosophy has to gain by walling itself off from science.

    What you give here is a description, and there are always alternative descriptions of phenomena possible, relying on differing frameworks, some more illuminating than others. Philosophy can sometimes do this really well, and there's value in that.

    Science is something else altogether, not just description but explanation. They're not in competition.
  • Why should we talk about the history of ideas?
    I also think people like historical narratives of how science, math, etc. develop because we are innately geared towards remembering people, conflicts between people, social interactions, etc. as a social species.Count Timothy von Icarus

    Yeah I'm not contesting any of that.

    Let's put it another way: suppose you're making some argument and you have in mind a particular interpretation of the 60s that would support your claim; but instead of presenting that version, you give a scrupulously neutral account of the 60s at the point where your tendentious interpretation would hook into the larger argument you're making. The reader either gets what you're (not) getting at or they don't.

    But what you've done is suppress your reason for referring to the 60s at all by moving to the scrupulously neutral version, and you've done this instead of just not reaching for the 60s in making your argument. You're trying to have your cake and eat it too, and violating Grice's maxims. It's not about whether the point you're making is persuasive or worth considering or 'legitimate' in some sense; it's the roundabout way of (not) making the point that is at issue.

    I do the same thing you did, where you suggested that 'an argument could be made ...' I use that one. I also use 'Some might argue ...' I think that's acceptable when it's really and truly not my position but a position I want to talk about or I want someone else to talk about. (I tried it several times in this thread.) I also use that when I'm not sure what to say about it except that it's a position that occurred to me is possible.

    But it might be a habit worth breaking, or at least it might be better just to directly say what those little phrases are standing in for. I think I'm happier when I just say things like "I think there are three options here..." and then lay them out. No confusion there about whether I'm advocating in a deniable way, etc.
  • Why should we talk about the history of ideas?
    That said, I don't think this leaves us unable to analyze intellectual history at all. We can observe that Renaissance thinkers "rediscovered," classical culture in an important way. We can spot major swings in US culture when comparing the 1950s and 1960s and be quite confident in describing real differences in trends.Count Timothy von Icarus

    Okay this is the perfect example.

    What will you say about the difference between the 50s and the 60s? Let's say this comes up in a discussion here on the forum, and broad strokes are acceptable. You want to describe the difference, how will you do that? What words will be in your description?

    There are, to start with, the two obviously opposing views, which I won't rehash in any detail. The 60s was either a time of liberation or of everything going to hell. But suppose you don't want to say either of those because you're doing philosophy, you're being scrupulous, you don't want to rely on an explicitly tendentious description of the 60s, so what will you say instead?

    You might just state some facts, by "facts" here meaning statements about the 60s you assume will be for the most part uncontroversial. More young people read Marx than in previous generations, and more claimed to have read Marx. Young people in considerable numbers publicly protested many government policies relating to the war in Vietnam. Many people protested racialized laws and police practices especially in segregated Southern states and large cities throughout the country. Blah blah blah. We're going to aim for neutrality here.

    My issue is not to nitpick over how neutral you can manage to be, but this: the more neutral you manage to be, the less likely it is that what you say has any direct connection to the larger argument you're making. That won't be true for all cases, obviously; if someone claims to prove that young people have never taken to the streets, that argument lands on a factual claim which can be refuted with a counterexample.

    But the cases I was interested in look more like this:

    A: The Industrial Revolution was a mistake.
    B: Why do you say that?
    A: The steam engine was invented in the 18th century and the first commercial use of Watt's improved design was in 1776 by an ironworks...

    Etc etc etc.

    By being scrupulously neutral in your description of history, you force the reader to 'connect the dots', to figure out what inferences you intend them to make. In a case like this it's obvious there's some connection, and depending on the rest of the paragraph many more connections might be implied or inferred, but none of that is actually stated. This is exactly the point I was addressing in the OP.

    So to make the point you're making clear, in many cases, you'll have to give up on this scrupulous neutrality and give in to being at least a little tendentious. In a lot of cases. Where you only need facts to support your argument, no. But if you need something taken as something for it to hook into your argument or your claim explicitly, for it to be anything more than obiter dicta, then you're down in the trenches offering an interpretation.
  • Evolutionary Psychology- What are people's views on it?


    I'll go and read the IEP article, thanks. It looks better than the New Yorker piece.

    My first reaction to what you've quoted is that this is a damned clever idea, based on the simple insight that we evolved when and where we did, and so it's the conditions then that have the most explanatory value. That strikes me as obviously true.

    For instance, there's a related theory kicking around, because climate change is on everyone's mind: Africa had long periods of being stably dry and long periods of being stably wet, but there's a brief period -- maybe 40,000 years or so? -- when the climate of Africa was swinging wildly back and forth, massive lakes here today gone tomorrow, that sort of thing; and it's right around then that homo sapiens emerges, so the theory is that we represent in part a hominid that is somehow more climate-adaptable than others. -- You could have just looked around at where humans ended up living and seen that, but that's not an explanation for why we are capable of living everywhere. -- But this theory does not seem to be committed to a "climate module" or something, but maybe someone has tried that.

    Rather than me just going through the same stuff you're reading and also responding to it, are there specifics in what you quoted that bother you?

    I can look at what you bolded.

    (2) sounds kind of speculative, right? But it does make sense: we face severe evidential constraints theorizing the mental faculties of early humans, but we can still figure out what their physical environment was like, so that's a way in. It's a clever idea.

    (4) is just true, isn't it? Or at least it's known that the human brain does have a considerable number of somewhat specialized modules, and that a lot of the more complex behavior we engage in (including cognition) is enabled by those modules being linked together in various particular ways. (It's all very reminiscent of Smalltalk because Alan Kay wanted computing to take biology as its model.)

    And (5) is just saying that we're stuck with our biology, isn't it? You and I choose to write different things, but the biology that enables us to read and write is almost identical.

    Here -- I'll just make what I assume is your point. Sometimes it appears we can actually overcome some habit of thought or behavior that goes so deep it might as well be innate. The example I have in mind is color constancy. There is reason to think visual artists can in some sense overcome the slightly misleading way we think about what we see. The example you have in mind is that we're programmed to reproduce but we can overcome that by moral reasoning.

    I would be interested to know what exactly painters are doing when they "see what colors are really there". Is that an after-market un-correction of the mis-correction our visual processing engaged in? The eyes do take in the "real" colors but presumably all the "original" data is destroyed without making backups. Maybe it's a matter of attention? Maybe you can train yourself to exclude contextual information about the ambient environment? --- For one thing, I assume not even painters do this all the time, but still see my blue Corolla as a kinda uniformly blue car. (There's some fading, some dirt, and some rust -- even I can see that.)

    For your point, obviously people can choose not to reproduce, so I'm puzzled about why you feel like you need to prove that, or why you think evopsych might be trying to prove that they can't.
  • Masculinity
    It's not an accident that Occupy has fallen silent with absolutely zero impact whilst there are actual workplace regulations about pronouns. It's because the former threatened Money, and the latter didn't.Isaac

    But also because pronouns are just easier, right? I mean, yeah, there's the cultural fight over it, but, as you note, the policy opportunities are straightforward. Even banning teachers from using a student's preferred pronouns is straightforward, if that's what you prefer.

    But Occupy, that was a heavy lift. Wholesale restructuring of the world economy is not a before-the-legislature-breaks-for-Thanksgiving kind of thing.
  • Aristotelian logic: why do “first principles” not need to be proven?
    Basically, all the stuff telling you that the visual pathway you stimulated by imagining the moving lawnmower was you doing it, not the outside world.Isaac

    So visualizing, imagining, hypothesizing, all that sort of thing, might be accomplished at least in part by inhibiting channels to an area involved in all sorts of practical issues (wiki says error detection, reward anticipation, decision making, on and on). That's extremely interesting.
  • Masculinity


    Here's a curiosity: the crazy right, and let's pick on QAnon, has both superheroes (we've all I assume seen the images of Trump's face on Rambo's body) and supervillains (Barack and Joe and Hillary, George Soros, Bill Gates?!); the crazy left? Kathleen Stock's very presence made certain people feel unsafe, so supervillain evidently. Jo Rowling. Trump obviously. But where's the left's superhero? For some in the US back in 2016 it was kinda Bernie Sanders. Otherwise? There are heroes certainly for the left, activists, but there's not much sign of a Trump-like superhero to rally around.

    Is that the difference between authoritarian and anti-authoritarian politics? No superheroes but plenty of supervillains? I suppose we could say that's a good thing, it's just that the other thing going on is that the crazy left seems to have agreed that everyone not a hero-activist is not a bystander, not an opponent, not a villain, but in fact a supervillain. The right still seems to distinguish between the evil masterminds of the new world order and the gullible cucks and libtards that they've taken in.
  • Evolutionary Psychology- What are people's views on it?


    Interesting quote I just came across today:

    On the one hand, reading acquisition should “encroach” on particular areas of the cortex—those that possess the appropriate receptive fields to recognize the small contrasted shapes that are used as characters, and the appropriate connections to send this information to temporal lobe language areas. On the other hand, the cultural form of writing systems must have evolved in accordance with the brain’s learnability constraints, converging progressively on a small set of symbol shapes that can be optimally learned by these particular visual areas. — Stanislas Dehaene and Laurent Cohen

    The spot for the recognition of letters and such is right next to the area dedicated to recognizing faces. I love the suggestion that on the one hand we have a largely innate capacity for recognizing faces, but that the writing systems we developed were designed to take advantage of just that sort of capability, so with a little specialization we get this. It's not that our writing systems are innate, but it's also no coincidence that we have the writing systems we do.

    I don't know much about the whole war over modularity, but I don't understand how lesion studies make any sense if the brain just gives us one big general intelligence. Some degree of modularity seems really obviously right.

    On the other hand, the great bulk of our behavior is going to draw on many, many modules in the brain. Exceptions might be things like flinching, ducking, those basic reflexes. But not, you know, art, or modeling someone else's beliefs, or making dinner.

    Maybe that puts me -- as if I had any expertise here, and I don't! -- in your lowercase "ep" camp.
  • Object Recognition
    activities, as: different and more than brain processesAntony Nickles

    our (human alone) relation to our understanding of our relation to objectsAntony Nickles

    Agreed.

    But to @NotAristotle's question, it does appear that objects are by and large constructed within the brain, without our awareness, and that this is true even of infants only some months old, as well as of many other animals. Not just by the brain as some sort of passive observer of course, but also through interaction with the environment.

    That still leaves a lot of room for human ways of relating to objects that are distinct from dog ways or hummingbird ways and so on.

    (1) where do we get the criteria for what counts as an object?NotAristotle

    Based on recent findings, some researchers (such as Elizabeth Spelke and Renee Baillargeon) have proposed that an understanding of object permanence is not learned at all, but rather comprises part of the innate cognitive capacities of our species.wiki article on Developmental Psychology

    (2) I think the issue is a "how does our brain do that" mystery. Light enters the brain through the retina, it is parsed as images (lines, shapes, colors, and so on). At what point does that assemblage of lines shapes, colors, etc. become an object? If it's the brain that does that, how does it do so?NotAristotle

    You'd have to read up on developmental psychology more than I have, but that's the place to look. The little bit of research I've read about has to do with infants, so for sure we're not talking about reasoning our way to objects -- there's almost certainly a specialized module for handling this stuff wired in, and connecting directly to a module handling some basic physics, which infants considerably less than a year old already understand.
  • Object Recognition
    One way to put this is that physical science can’t do the work of philosophy, can’t solve our concerns and confusions with our human condition. We want it to take us (our failings) out of the picture, but the process of working with objects is a human activity.Antony Nickles

    "Human"?

    Dogs don't bury bones? Beavers don't build dams? Owls don't catch field mice?
  • Masculinity
    Does nobody want to compare the behavior of the (young, online) left to the behavior of the (young and old, online and on-air) right during this period? The insular paranoid style has come back with a vengeance on both sides. The right sees SJWs and RINOs and libtards everywhere, the left white supremacists and transphobes and libertarian trojan horses. Everyone doing their part for the return of religious fundamentalism worldwide, even if your gang isn't officially a religion. QAnon looks a lot like the satanic panic, looked at one way. Hounding Kathleen Stock into retiring looks a lot like McCarthyism in almost every way -- or the Cultural Revolution, jesus.

    These last many years I have found plenty of reason to say, "That whole Enlightenment thing -- big waste of time."
  • Evolutionary Psychology- What are people's views on it?


    Just a general comment here. I think you're missing the point of the field.

    There are a lot of areas where people assume they know roughly what the explanation of some human behavior is, even if they don't know the details, and that explanation often begins with a broad gesture at history and culture.

    But sometimes there is a kind of explanation available that is really quite different. Often what the sort of explanations I have in mind have in common is that they contest the generally "intellectualist" approach to human culture and behavior. There are classic examples in the work of anthropologist Marvin Harris, who offered what we might call "material" explanations for things like religious dietary restrictions. Just as curious is the reverse: the emphasis on culture as shaping economics in the work of Marshall Sahlins. Harris in his day was about as controversial as Robert Trivers is in ours.

    All of this to say that I think evolutionary psychology is valuable at the very least for moving the Overton window here, in much the way that anthropologists like Harris and Sahlins did -- what if we don't assume we already know how this works but try, you know, the opposite? In the case of human behavior, what if we don't assume it's all cultural, but consider that maybe a great many facets of our lives make perfect sense if we remember to think of ourselves as animals first and foremost, and expect that to be more than just the bare substrate upon which we grow our rich and marvelous cultural lives.

    What's the alternative? We're born animals but leave all that behind almost immediately? After the last 150 years of biology and psychology that sounds like a non-starter.
  • Aristotelian logic: why do “first principles” not need to be proven?
    I think the category 'logic' may be just too broad and in cognitive psychology terms isn't a 'natural kind' at all, but rather two (or three) completely separate processes, which involve both sensory data, and interoceptive modelling.Isaac

    Okay, that's helpful. Toward the end I was starting to imagine an almost ad hoc building up toward the general, generalizing just as much as you need to resolve a conflict. But it bothered me that once again I was starting to treat inference rules as premises, habits as beliefs.

    I like this less abstract approach of considering what sorts of cognitive departments an organism might develop and then looking at what those could conceivably do and what that would look like. My whole approach in the last post was way way too abstract.

    it is often a matter of a 'picture' rather than a narrativewonderer1

    I do think that's really important. (Sellars used to actually draw pictures in his typescripts and commented once that everybody uses images; it's just that he leaves them in. One of his two most famous papers has the word "image" in the title and the other has "myth".) These days I almost always approach probability problems by imagining a rectangle and then carving up the total space into areas. Numbers are decoration. (Bonus anecdote: Feynman describes an elaborate visualization technique he used to figure out whether a conjecture in mathematics was true or false, a game he used to play, I think as a grad student, talking to guys from the math department. If he got it wrong and they pointed out the condition he missed, he'd reply, "Oh, then it's trivial," which is incontestable when talking to mathematicians, kind of an "I win" card.)
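    The rectangle-carving approach can be made concrete with a toy example (mine, not from the thread): the classic meeting problem, where the answer just is an area of the unit square and the numbers really are decoration.

    ```python
    # Geometric probability by carving a unit square into areas.
    # Hypothetical example: two friends each arrive at a uniformly random
    # time in [0, 1] and each waits `wait` before leaving; they meet iff
    # |x - y| <= wait.

    def meeting_probability(wait: float) -> float:
        """Area of the band |x - y| <= wait inside the unit square.

        The square has area 1; the two corner triangles where they miss
        each other each have legs of length (1 - wait), so the meeting
        region's area is 1 - (1 - wait)**2.
        """
        return 1 - (1 - wait) ** 2

    print(meeting_probability(0.25))  # 0.4375
    ```

    No probability formulas needed: draw the square, shade the band around the diagonal, and read the answer off as an area.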

    Blah blah blah, I'm just so focused on linguistic and symbolic reasoning that it's hard to know what to do with visual reasoning, but if it's not obvious then I must be doing something wrong. This is probably me being too abstract again and it would be clearer if we considered how organisms like us rely on visual "input".

    I don't think I posted this but I did a little introspective experiment last week where I looked at objects on the porch and out in the yard and imagined them moving. I developed some skill at that kind of visualization as a chess player, though I'm rusty now. The result was that I did not hallucinate the objects moving, there is no interruption of the visual stream, which still shows the lawnmower in the same place, but it "feels" like I'm seeing it move. It's like hypothetical movement does fire the extra "what this means" pathway but stays off the main "what I'm seeing" pathway, almost like the reverse of Capgras delusion. When I coached young players I used to tell them to imagine the pieces very heavy when they calculate so they could more easily remember which square a piece was on in their imagination. Curious.

    A chapter into Mercier and Sperber and the model is pretty exciting.

    filtered outIsaac
    suppressive feedbackIsaac

    This! I'm always forgetting how much of our mental processing is devoted to filtering. That's another point that makes my last post feel off.
  • Why should we talk about the history of ideas?
    There is an argument to be made that focusing on arguments in isolation is akin to putting all your effort into finding out the best way to walk and making the most accurate maps, while completely ignoring the question of where you are walking from or to and why.Count Timothy von Icarus

    That is exactly the sort of position I was hoping someone would advocate -- but for some reason even you hedge here and don't advocate it -- and why I didn't feel comfortable just branding @Wayfarer's lectures on history non-sequiturs.

    Saying these turns are "necessary," might be a bridge too far, but they also aren't wholly contingent as in the natural selection type theory of how knowledge progresses.Count Timothy von Icarus

    And this is just obviously right.

    Here's two points, one from the thread, and one kind of its background:

    (1) Lots of people say history has pedagogical value, that you can understand ideas better if you know their history, what they were responding to (as in the second quote), the whole context, even what came after in response.

    (2) Some people hear "the Enlightenment" and think, "Greatest wrong turn in history, still sorting out the mess it made," and some people think "Finally! That's when we got on the right path, the only trouble is staying on it."

    I think one of the issues @Isaac was raising is that (2) exerts a considerable influence over how you enact (1). Are you going to put the Enlightenment into a story in which it's the good guy, disrupting Bad Old Tradition (especially religion), or the bad guy, depersonalizing nature, atomizing everything, destroying the tried and true holistic understanding of things (and banishing God to fairy land)? @Isaac's suggestion is, I believe, that there is no 'objective' context to recover to understand the Enlightenment; however you describe that context, before and after, is going to be shaped and colored by the story you're telling about it.

    And that's likely just true, but may leave some room for comparing stories, judging them more or less comprehensive, more or less true to the (cherry-picked) facts, just the usual stuff. I mean, of course we do that. But the calculus changes here if you recognize that all you have the option of doing is comparing stories (and what they present as evidence for themselves) to each other; it's obvious with history, but true everywhere, that you don't have the option of judging a story by comparing it to what it's about, 'reality' or 'what really happened'. Comparing stories to each other might give some hope of 'triangulating' the truth, until you remember that this triangulating process is also going to be shaped and colored by narrative commitments, just like the material we're trying to judge.

    Thanks for bringing us back to the topic. More interesting points in there than I've responded to.
  • Aristotelian logic: why do “first principles” not need to be proven?
    Basically, if the process of reasoning (which is effectively predictive modeling of our own thinking process), flags up a part of the process that doesn't fit the narrative, it'll send suppressive constraints down to that part to filter out the 'crazy' answers that don't fit.Isaac

    Corrective rather than constructive, and the consistency being enforced is that of the narrative your current model is organized around, rather than "the way the world really is" or something.

    Some of that seems almost obviously true, but here's what still bothers me: if logic is a system of constraints that enforce (or, as here, restore) consistency, even if that consistency is with something like a narrative arrived at by other means, that still leaves logic as a set of universal, minimal constraints that everybody ends up following. Our narratives may be handmade and idiosyncratic, but unless the consistency I enforce (with that narrative) is also handmade and idiosyncratic, logic is still universal.

    We don't have to go straight there. One of the things @Joshs talks about is paradigm or culture as the constraints on what counts as evidence. You could see something like that operating at the layer we were describing here as the corrective constraints. The next level up from your narrative might be this cultural layer that enforces a specific sort of consistency that would be different in another culture or under another paradigm. That's plausible. And there could be any number of layers, a hierarchy of constraints, variously idiosyncratic or cultural or community-driven, or even species-specific. But it seems like that pattern points to a minimal set at the top that looks a lot like logic, which annoys me if there's no explanation for where that set of constraints came from.

    If, on the other hand, the most general constraint level is constructed by successively generalizing from the lower layers, whatever they may be, then that sounds a bit like the story I was hoping to tell about logic emerging from our practices rather than pre-existing them. Once in place, of course they can cascade (selectively) back down through the hierarchy to constrain our belief formation and so on, so they play that normative role of something we strive to conform to, but we're striving to conform to rules we ourselves have made and can take a hand in remaking and revising. All that's needed is a mechanism for generalizing and some motivation to undertake such a project. (And I swear to god this sounds almost like the old empiricist theory of generalizing from experience.) It is still a little uncomfortable for us to be converging on very, very similar top-level constraints, but maybe it shouldn't be.

    One thing I haven't paid much attention to yet is that logic, like language, needs to be usable while it's being built. You can generalize a new higher level constraint and begin cascading that back down as soon as you build it -- and handling some specific case immediately is probably why you've built it, though it might take like forever before you get around to enforcing that constraint everywhere -- it's more of an as-needed, just-in-time thing.

    There's also some question about whether the constraints at any given level are consistent with each other. Could very well not be and that could go on until some major failure forces you to add a new level with a rule for sorting that out. And if it comes to that, this might really be a hierarchy only in the sense that it has a kind of directed graph structure where two nodes may not have a parent (only children) until there's a conflict and a parent node is created to settle that conflict.
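    That directed-graph picture can be sketched in a few lines (my own toy illustration; all the names here are hypothetical, not anything from the thread): constraints start out parentless, and a parent node is minted only when two of them conflict and something has to adjudicate.

    ```python
    # Toy sketch of a conflict-driven hierarchy: a forest of constraint
    # nodes where parents are created lazily, on demand, when a conflict
    # between two lower-level constraints forces a more general rule
    # into existence.

    class Constraint:
        def __init__(self, name: str):
            self.name = name
            self.parent = None  # no parent until a conflict demands one

    def resolve_conflict(a: Constraint, b: Constraint, rule: str) -> Constraint:
        """Mint a new parent node that settles the conflict between a and b."""
        parent = Constraint(rule)
        a.parent = parent
        b.parent = parent
        return parent

    # Two lower-level constraints begin with no common parent...
    no_bleach = Constraint("don't ingest disinfectants")
    sanitize = Constraint("disinfect everything")

    # ...until a clash forces a higher-level rule to be created.
    top = resolve_conflict(no_bleach, sanitize, "safety outranks hygiene")
    assert no_bleach.parent is sanitize.parent is top
    ```

    The structure is a hierarchy only in the weak sense described above: it is a directed graph that grows upward as needed, not a tree laid out in advance.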

    We're not a million miles away from Quine's web of beliefs, but he tended to talk in terms of a core area of the most abstract rules like logic, and a periphery that is the most exposed to experience. And he continually waffled on whether the rules of logic at the core were subject to revision.

    Is this all just empty model spinning, or does it sound reasonable? @Janus? @wonderer1
  • Aristotelian logic: why do “first principles” not need to be proven?
    What I'm convinced doesn't happen (contrary to Kahneman, I think - long time since I've read him) is any cognitive hacking in real timeIsaac

    I don't recall getting such an impression from Kahnemanwonderer1

    It's probably me misremembering or misunderstanding, and I'll look again. Mercier & Sperber mention in the introduction to Enigma of Reason that their model is different from Kahneman's in not really having two different types of reasoning process.

    I do remember feeling back when I was reading TFS (which, full disclosure, I didn't get all the way through) that the thrust of it was that we reason logically less than we think we do, but we can make an effort to notice when a bias has crept in and respond. (Remember the little self-help sections at the end of the chapters? "Gosh, maybe I'm letting system 1 get its way here, and I should slow down, have a system-2 look at this." To which my response was always that I already spend a hell of a lot of time in system 2, so, you know, "does not apply" boss.) If that's so, logic is still a system of rules for getting better -- meaning, more likely to be true -- answers and its status is still unexplained.

    I'll just go look at the book, but another general impression I got from the book is that we rely on system 1 so long as it works well enough, but system 2 is there for when things go wrong, and the response to surprise is that the slow, careful process takes over, and it has different rules, actually looks at the evidence, makes properly logical inferences, and so on. Which, again, leaves what logic itself is and why it works unexplained.

    But I'll go look.
  • Aristotelian logic: why do “first principles” not need to be proven?
    This all seems fine on a cursory reading.Janus

    But it's also whacko. I'm surprised you're unfazed, but cheers.
  • Aristotelian logic: why do “first principles” not need to be proven?
    If it's to give us better belief sets (where 'better' here could be any measure for now), then we're putting the cart before the horse in our argumentation methodology, we should be saying "look how successful my belief sets are - that proves they cannot be self-contradictory", forget logic - point and counter-point should be various successes and failures in our personal lives!

    But we don't. We think it the other way round, we think that one ought hold a belief set which adheres to these argumentative rules regardless of whether it's useful or not. As if there were some nobility to doing so. Perhaps we'll be rewarded by God...?
    Isaac

    This is the main thing I'm trying to get past. I think there's a typical assumption that our beliefs have a clear logical structure and if an inconsistency has snuck in then your beliefs are in a sort of defective state, you'll make worse predictions, and you'll end up mistakenly drinking bleach. Or at any rate, false beliefs get weeded out through contact with the real world, leaving behind true ones you can safely make sound inferences from. That kind of model. Representational, computational, and rational.

    Certainly some chunks of our beliefs look to us like they were stitched together with some care, and some don't, but I'm not convinced that whatever consistency, whatever structure there is is there by choice. Even before "AI" became something people said every day, there was talk of evolutionary algorithms at places like Facebook and Google, so complicated that none of their engineers understand them. I assume something a lot like that is true of our beliefs. There's probably something identifiable as structure in there but it's nothing at all like the two column proofs you learned in school and it's inconceivably more complicated. That's my guess anyway. The occasional dumbed down summaries of what's going on in there are what we call reasons and arguments.

    That still leaves room for an account of reason as a social practice rather than, I guess, a cognitive faculty.

    Is this roughly where you are?

    I think Kahneman's view is that we can learn how to intervene in our own thinking process, correct our misguided intuitions using logic and math, and over time thus improve our habits of thought. I'd like to believe that...
  • Aristotelian logic: why do “first principles” not need to be proven?
    My impression is that we are talking about entirely different things.Janus

    We are, yes, absolutely. I'm just kind of curious to see how it goes.

    All I'm addressing is, if you want to engage in such debates, then your argument better not contradict itself, or it won't be taken seriously or be of any use to anyone.Janus

    Maybe. I think @Isaac would agree with that -- rules of the game we play here.

    If there is such a convention, I could certainly choose to follow it, and that might be worthwhile, depending on what I get out of playing the game. That would leave a couple questions: (1) is it anything more than a convention -- a law of the universe, say? (2) if it is a convention, does it have a purpose and if so what?

    (1) I'm just going to ignore, but (2) is exactly what I'm interested in.

    You've suggested a couple times that if I contradict myself, you can't tell what I'm advocating. Let's say that's true. If I contradict myself, there's no clear response for you -- at least agreeing or disagreeing with me don't seem to be options, but you can still call me out for breaking the rules, and you can indicate you don't intend to break the rules yourself. So that's a cost you willingly incur, making the effort not to contradict yourself, and that should count for something, a sign of your bona fides, of your intention to engage seriously. Someone who breaks the rules has refused to ante up, and is not taken seriously. Everyone agreeing to incur some cost, to put in a modicum of effort, builds trust. That's clear enough.

    If there's a cost to not contradicting yourself, if it takes effort, then we must be sorely tempted to contradict ourselves, must be on the verge of doing so regularly, and that doesn't sound right. I don't expect people to hold consistent beliefs, but direct self-contradiction is still pretty rare -- it's like we don't have an introduction rule for 'P & ~P', just not the sort of sentence we generate except by accident. (If there are contradictions or inconsistencies, they're generally more subtle. I searched the site for accusations of self contradiction, and, as you can imagine, the accused party universally denies that they have done so, and then there's a back and forth about whether what they said really is a contradiction or not. It's never dead obvious like 'P & ~P'.)

    I mean, maybe the cost story holds up even if the cost is minimal -- it's the thought that counts -- or maybe it works better as a package, agreeing to something nearly amounting to all of classical logic and some induction and some probability and on and on. Now we're talking quite a bit of effort.

    But is there something else? Some reason for this rule in particular? Do I have a motivation to make sure you have clear options of agreeing or disagreeing with me? I might, if we're choosing sides. Might just be politics. Anything else? There is the standard analogy of assertion as a bet -- you look at the odds but then you have to actually pick what to bet on to stand a chance of winning anything. (Cover the board and you'll tend to break even.) Do I have a motivation to gamble in our discussion? Do I stand to win anything by picking one of the two sides I have evidence for? Maybe, if it makes your response more useful to me. If I have evidence for both sides of an issue, it might not even matter which side I pick, so long as I can elicit from you more support for one side or the other, by giving you the opportunity to argue against me, or add your reasons for agreeing.
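    The betting analogy can be made concrete with a toy book at fair odds (the probabilities and stakes below are invented for illustration): covering the board locks in exactly zero on every outcome, while backing one side keeps the same zero expectation but leaves something to actually win or lose.

    ```python
    probs = [0.5, 0.3, 0.2]  # hypothetical fair odds over three outcomes

    def profits(bets, probs):
        """Net profit under each outcome, at fair odds (bet i pays bets[i]/probs[i])."""
        staked = sum(bets)
        return [b / p - staked for b, p in zip(bets, probs)]

    # Cover the board in proportion to the odds: every outcome nets exactly zero.
    print(profits([0.5, 0.3, 0.2], probs))   # [0.0, 0.0, 0.0]

    # Back a single outcome: same zero expectation, but now there's something to win.
    print(profits([1.0, 0.0, 0.0], probs))   # [1.0, -1.0, -1.0]
    ```
    
    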

    So that's two arguments for a strategy of respecting the LNC: (1) especially when taken together with other conventions of discussion, it represents a cost incurred by participants, which builds trust; (2) it's an efficient strategy for eliciting responses useful for updating your own views. (The latter is the sort of thing apo mentions regularly, the need for crispness, all that.)

    Good enough for now, I guess. I'm still mulling it over.
  • Evolutionary Psychology- What are people's views on it?
    Are we just going to do another round of the endless consciousness debate in this thread? — Srap Tasmaner

    No.
    schopenhauer1

    But is it [ human behavior? ] amenable to science is the question.schopenhauer1

    But that is exactly the endless debate about consciousness here.
  • Aristotelian logic: why do “first principles” not need to be proven?
    but what if you addedJanus

    Well, that's the thing. It's really already in there, because we're just talking about a working hypothesis, just pragmatism. All bets come hedged.

    And if your position is self-contradictory would that not amount to being no position at all?Janus

    I don't know what to say to that because I don't see how it's a useful question. It's fighting the last war.

    Should I be afraid that I might sometimes sound like I have an opinion when, unbeknownst to me, I don't?

    Should I worry that I might try to predict whether that rock will hit me but somehow fail to make any prediction at all because a contradiction snuck in somewhere?

    Reasoning as we actually do it is a rough and ready business, constantly on the move. I can imagine arguing that contradictions get weeded out because they're inherently useless, being necessarily false, but I doubt even that's right. We often have good reason to believe both sides of a story, so we keep our options open, and for a while they live side by side. So what?
  • Evolutionary Psychology- What are people's views on it?
    Eh, evolution related to physical artifacts, and biological systems, even perhaps cognitive systems. But more complex behavior? Much more of a grey area.schopenhauer1

    Sure, but what do you take away from that?

    Are we just going to do another round of the endless consciousness debate in this thread? "Science still hasn't explained it, so it's not biology." That's a crap argument. Science is hard, and it takes a long time, and people need to deal. Why is everyone so intent on second-guessing science? Why all the armchair quarterbacking? Just say thank you and let them do their work.

    Everyone knows behavior is both nature and nurture; we're just working out the details. I think it's both natural and salutary for biology to push the envelope a bit because that's how you can find the limit, the point where you say, past here it must be something other than biology. If that means evolutionary psychology and sociobiology are still in the 'over promising' phase then the 'under delivering' will pull things back, probably too far, and the pendulum will keep swinging but with a shorter and shorter period. We hope. But if no one ever tests the biology-first approach, we're not going to learn much.
  • Evolutionary Psychology- What are people's views on it?
    that last sentence kind of contradicts what you're sayingschopenhauer1

    I meant the list as a whole -- some of the stuff on the list might be cogent critiques that are crucial to the future development of the field or even its collapse. I wouldn't know. But some of what's on there is definitely not that, so the list as a whole is not, say, evidence that the field is disreputable or something. That's all I meant.

    Anyways, I think it's fine as a discipline. However, I see it really straddling the line. It's not just a field of study. It's underlying premise is that various behaviors, some very specific ones, can be traced back to processes that are hard to prove.schopenhauer1

    Proof isn't exactly on the table anyway. I think what you're saying is that evolutionary explanations of behavior are inherently more speculative than other sorts of explanations, and I'm not sure that's true, because we have some pretty solid ideas about how evolution works, so at least the foundation is solid, even though it's shifting all the time. Cultures and languages also evolve, and the mechanisms are quite similar, but I think there's not much prospect of a science of culture that would look much like biology. Maybe someday, but for now that appears to me at least to be beyond us.

    I think the big takeaway from the last hundred and fifty years of biology and psychology is that we are not nearly so different from other animals as we used to think. We're still trying to figure out just what is and what isn't different about us, and evolutionary psychology is the obvious terrain for whatever fights we have about it.
  • Evolutionary Psychology- What are people's views on it?
    my point isn’t some crazy outlierschopenhauer1

    No of course not, but why should you care if it's an outlier? You're an anti-natalist, for chrissakes. Outlier is where you live.

    Of course people have critiqued evolutionary psychology. Of course there are examples, especially I think from earlyish days when people were a little over-excited about the prospects for it, and some of that stuff is a bit cringe.

    But so what? It's obviously not a stupid idea. We are what we are, and the principal science of what we are is biology, and biology is completely steeped in evolutionary theory at this point. Of course there will be insights about human beings that are shaped by our understanding of evolution. How could there not be?

    (I was ever so slightly teasing you about the list because it's obviously a real mixed bag, even to someone as ill-informed as I am. Some of what's on there is clearly going to be a defense of the ideas you were attacking. Some of it consists of notoriously, let's say, "motivated" attacks, not taken seriously by anyone, I think, rather like the drubbing sociobiology took mainly from stuffy humanities types. It's nothing like evidence that evopsych is a disreputable field or a field in crisis or something. Might be, but that list would have nothing to do with it.)
  • Aristotelian logic: why do “first principles” not need to be proven?
    What I meant was that within the presentation of an argument self-contradiction would make it unclear what position was being asserted, or even mean that no position is being asserted.Janus

    I just don't see how you're going to cleanly partition what is and what isn't part of an argument.

    Why am I even arguing about this?

    I don't think the LNC is useful at all as a description of how people reason or how they argue. People are frequently inconsistent, and philosophers know that better than most, not least because they accuse each other of it all the time. I see no sign that communication requires the kind of perfect consistency suggested, and I suspect there's a terribly unrealistic model of language and communication at work there.

    I doubt the LNC is even useful as an ideal to strive for. If our mental faculties are primarily geared toward making useful predictions, and those predictions are probabilistic, I don't see what the LNC even brings to the table. My beliefs are mixed, my expectations are mixed, the evidence I accumulate is mixed, and what's required of me is flexibility, continual updating and exploration. It's not a matter of adding or subtracting atomic beliefs from my store of truths; change is always cascading through the system of my beliefs, modifying the meaning even of beliefs I "retain".
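    For what it's worth, that probabilistic picture can be put in miniature (the likelihood numbers here are invented): holding "both sides" is just a credence strictly between 0 and 1, revised piece by piece as mixed evidence comes in -- no contradiction ever gets asserted.

    ```python
    def bayes_update(prior, lik_if_true, lik_if_false):
        """Posterior P(H | E) from prior P(H) and the likelihoods of E either way."""
        num = prior * lik_if_true
        return num / (num + (1.0 - prior) * lik_if_false)

    # Start undecided, then fold in mixed evidence, some favoring H, some against.
    credence = 0.5
    for lik_true, lik_false in [(0.8, 0.3), (0.2, 0.6), (0.9, 0.4)]:
        credence = bayes_update(credence, lik_true, lik_false)

    # The credence swings but stays strictly between 0 and 1: "believing both
    # sides" is an intermediate probability, not an asserted contradiction.
    assert 0.0 < credence < 1.0
    ```
    
    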

    I do think I get where you're coming from, as a reformed logic guy myself. I'm not really arguing to convince you, just giving you some idea why I don't find much of value or interest in the LNC.