• wonderer1
    2.2k


    Very very insightful, and you are recognizing things that I only recognize as making a lot of intuitive sense, as a result of you having put things in your own words. It's so cool that your background knowledge allows me to communicate with you about such things, with such productive results.

    I suspect I'll have more to say after I've had some time to reread and cogitate more on your response, but I wanted to say that much for now.
  • wonderer1
    2.2k
    Not everything can be made explicit. -- Srap Tasmaner

    Feeling some poetry...

    From The Prophet by Kahlil Gibran, On Teaching:

    No man can reveal to you aught but that which already lies half asleep in the dawning of your knowledge.

    The teacher who walks in the shadow of the temple, among his followers, gives not of his wisdom but rather of his faith and his lovingness.

    If he is indeed wise he does not bid you enter the house of his wisdom, but rather leads you to the threshold of your own mind.

    The astronomer may speak to you of his understanding of space, but he cannot give you his understanding.

    The musician may sing to you of the rhythm which is in all space, but he cannot give you the ear which arrests the rhythm nor the voice that echoes it.

    And he who is versed in the science of numbers can tell of the regions of weight and measure, but he cannot conduct you thither.

    For the vision of one man lends not its wings to another man.

    And even as each one of you stands alone in God's knowledge, so must each one of you be alone in his knowledge of God and in his understanding of the earth.
  • Srap Tasmaner
    5k
    Kahlil Gibran -- wonderer1

    Nice. There are times when the obvious truth of this really hits you, and it's just as true that we learn an enormous amount from other people. Somehow.

    Also, I think it turns out I'm re-inventing the approach of Mercier and Sperber in The Enigma of Reason. Just read the introduction and there were lines that could have been in my post, which is odd. Now I don't know if I should wait to read the book until I've worked out some more of this on my own. (Call it Gibson's Dilemma: there's a story that William Gibson bolted from a screening of Bladerunner because he was in the middle of writing Neuromancer and it was too close.)

    Very chancy business, this life of the mind.
  • wonderer1
    2.2k


    I just bought the book, after looking at the Amazon description. But I've had a lot of time to think about this sort of stuff on my own. I do see some wisdom in you waiting awhile to develop your own view a bit more, in order to be better able to critically evaluate the book.

    I will say that what I read of the book's focus on human interaction matches up well with my view. In fact, in discussions of free will, I've often referred to myself as an interactive determinist. The interaction part is important. Anyway, I could go on about this at length, but I'm hopeful Mercier and Sperber will give me tools for communicating about it more effectively, so I'll hold off for now.

    And I'm glad Gibson bolted from Bladerunner because I loved the uniqueness of his vision.
  • Srap Tasmaner
    5k
    But I've had a lot of time to think about this sort of stuff on my own. -- wonderer1

    Alas, so have I. I remember -- this may have been 25 years ago -- arguing on the defunct ANALYTIC-L mailing list that producing reasons for your beliefs is (just) a practice of ours. I was very Wittgensteinian back then.

    I was much impressed with David Lewis's Convention some years ago, his attempt to ground language in game theory, and after that I began to think of Wittgenstein as a man trying and failing to invent game theory. I wanted to do something similar for logic and reason -- modus ponens would fall out as a Pareto dominant strategy, that kind of thing.

    I wanted to provide a social explanation for reason, but leaving it more or less intact -- and this is the aporia that Lewis ran into, that he couldn't directly link up the convention account of language to the model-theoretic account he was also committed to.

    So recently I've decided that if I have to give up the timeless truth of logic to get to a social grounding for reason, something consistent with psychology and naturalism, I'll just have to give in to full-bore pragmatism, no more mysterious third realm for logic and good riddance.
  • wonderer1
    2.2k
    I wanted to provide a social explanation for reason, but leaving it more or less intact -- and this is the aporia that Lewis ran into, that he couldn't directly link up the convention account of language to the model-theoretic account he was also committed to. -- Srap Tasmaner

    I don't know what you had in mind regarding a social explanation for reason, but I do see there being a very strong social explanation for reason, in that logic is deeply tied in with our use of language. I speculate that logic becomes a matter of undeniable intuition as we are grasping the relationships between language about reality and reality itself. Because our intuitions about logic develop alogically when we are young, as recognition of patterns in how language relates to facts about the world, by the time we start thinking 'metalogically', those intuitions have the 'feel' of a priori knowledge.

    That is very much a matter of intuitive speculation though, and not something I feel equipped to make an evidential case for. So please point out any holes that might seem obvious to you.
  • Srap Tasmaner
    5k
    I don't know what you had in mind regarding a social explanation for reason -- wonderer1

    Roughly just an "arguing first" view -- that logic is not a handy tool waiting to be used, pre-existing our use of it when arguing, but that the rules of logic come out of what we do when we discuss and argue. That looks impossible because what other criteria could we have for whether I win the argument or you do, for whether my argument is better or yours? --- Still, I'm convinced (at the moment) it has to be done.

    I speculate that logic becomes a matter of undeniable intuition as we are grasping the relationships between language about reality and reality itself. -- wonderer1

    I'd be hesitant to put it that way. I don't quite want to just block anything that smacks of a representational view of language -- at the very least I'd want to know why we are so strongly inclined to think of language as representational.

    Also, if you think of logic as kind of a distillation of language, something implicit in it, I think that's going to turn out to be wrong. As above, I think there's a strong impulse to think of language this way, as carrying along logic inside it as its necessary skeletal structure, but natural language is much more subtle, much more flexible, and also much wilder than logic. But I also don't think it's simply a mistake; something about the way language works, or the place of it in our lives, almost demands that we misrepresent it, so to speak.

    But now I'm speculating about your speculation...
  • Isaac
    10.3k
    It does seem to be an acquired taste, and some psychologies make acquisition much less likely. Still, there are those times when you can lead someone to a more accurate understanding of their own nature and change the rest of their lives for the better. -- wonderer1

    Absolutely. The thing about psychological theories is that everyone has them; you have to have one, otherwise your strategies when interacting with others are random. We don't just throw darts blindfold when deciding how to respond, we have a theory about what our actions/speech is going to do, how it's going to work. That's a psychological theory.

    People mistakenly disparage psychology for merely attempting such modelling, but we all do it by necessity. Psychology is simply trying to develop other methods of doing so, something supplementary to just passively taking in the experiences in your small social circles and then 'having a reckon' about it. If psychology fails, it is its methodology that's at fault, not its objectives. So in order to be useful (unlike many other sciences), we only have to be better than guesswork, because there isn't an 'I just won't have a psychological theory then' option.

    After I left academic research, I worked for a risk management company, and that was my exact job pitch - it's better than guesswork. That's enough, apparently, to be worth a consultancy fee.

    I'm really enjoying participating on TPF, and I've already received a warning for bringing up a psychological topic, so perhaps later in a different context. -- wonderer1

    Ahh. The mods are kittens really. I'd have about two thirds of the membership banned within the first day of having such powers invested in me - but yes, have a care. Sneak it into a thread about something else...!
  • Isaac
    10.3k


    Very nice. I'll respond simply by highlighting the differences and similarities with my own thinking...

    Being in the habit of telling each other what we know, I tell you something I think I know -- about the mind or reality or some philosophical thing -- but instead of thanking me, you disagree. This is shocking and bewildering behavior on your part. (Surprise.) -- Srap Tasmaner

    I'm with you regarding the habits, but I think the expected behaviour is more agreement than thanks. I think what we're looking for is confirmation that we've got it right. That can take the form of an agreement, but also the form of a passive student (after all, if they accept what we say, they've agreed we're right). Minor difference (but you'll see there aren't any major ones). Then, yes, either way - "you don't agree! Now how am I going to predict your responses?"

    If I do not understand your position at all, that's the worst case for me, because what kind of action (i.e., talking) can I engage in in response? Anything is better than this, so my first step will be to substitute for your position a position I believe I understand and can respond to. (There's a cart before the horse here. Have to fix later.) -- Srap Tasmaner

    Yes. Not only easing the discomfort, but this is also the most profitable policy for reducing surprise. If actual agreement (top priority) doesn't reduce surprise, then we can at least fall back on predictable narratives about conflict. If we see our 'opponent' as 'one of those types' and substitute a set of beliefs we think we know the causes of (erroneous ones), we can settle in to a little vignette which we know the script for. Recognise any of this from this very thread? "Someone mentioned history in a vaguely negative sense! I know the argument in favour of history, I'll substitute that for an argument against whatever this lunatic is actually saying".

    Glance through the major conflict threads, you'll see hundreds of examples. The counter-argument isn't against the actual argument given, it's against the script that the interlocutor ought be following, given that they're 'one of those'.

    I want to bring your views into alignment with mine, and that's why I make arguments in favor of my belief. But I probably don't really know why I believe what I believe, so I'll have to come up with reasons, and I'll convince myself that if I heard these reasons I would be convinced. But really I have no idea, since I already believe what I'm trying to convince you of; it's almost impossible for me to judge how much support these reasons give my claim. Finding reasons for what I already believe presents almost no challenge at all. -- Srap Tasmaner

    This is brilliant. If I could have explained it that well I would have saved myself a lot of trouble. The underlined is the part we deal with here. Of course your reasoning seems convincing to you, it already convinced you. It's what motivates the majority of the posts which begin with "Obviously..." It's such a strange beginning to a post, yet so common (I'm sure I've done it - this isn't exculpating). By the very nature of the activity you're engaged in, it 'obviously' isn't obvious, but here it is proclaiming that what you're expending all this effort demonstrating to (presumably) an epistemic peer, is, in fact, obvious.

    I'd say more about this section because it's very much in line with my thinking, but unfortunately that limits rather than extends what there is to say. Just 'yes'.

    Denying the premises is really the least of my worries, because we're talking roughly about intuitions -- making this the fourth recent thread I've been in to use this word -- which I'm going to gloss here as beliefs I don't experience as needing justification. If you share my intuitions, we still have to fight about the support relation; if you don't, I can just keep daisy-chaining along until we find something we agree on. This is routine stuff, have to have common ground even to disagree let alone resolve such a disagreement. -- Srap Tasmaner

    Possibly. But that 'daisy-chaining' isn't at all risk free either. I think denying premises can become a serious worry when the denial is unexpected. There are premises which we hold, but expect people to genuinely hold the opposite of (like economic theory, what exactly happened last Tuesday, who's to blame for the 'state of the world', etc.), and there's premises we don't expect people to genuinely hold the opposite of and so it's easier to simply assume disingenuousness (moral sentiment, aesthetic judgement...). Coming back to what you said earlier, these areas are, not by coincidence, the same areas where we don't have a good set of reasons for why we believe what we do. As such, we lack a script for the persuasion game.

    If you start from the idea that some people will just "get it", we're still talking intuitions; as you spell out more and more steps between what your audience accepts and what they don't, this is what logic looks like. The usual view, of course, is that "being logical" makes a connection a candidate for a step in the argument; the thing is, I think we spell things out only to the point where the audience agrees, which means something they accept without reasons -- and here we're talking precisely about the support relation that holds between one belief and another, and the sorts of things I come up with are just things that sound convincing to me as someone who already believes, which means my process for producing reasons is a kind of pretend. -- Srap Tasmaner

    Yep. It sounds like you've reached the same point I have. What makes a support relation convincing between two beliefs we already held (but presumably didn't have the support relation for before hearing the argument)?

    So currently (work very much in progress) I have it turned on its head. It's not that I'm looking for the support relation that my audience will accept as leading to the conclusion. It's that I'm selling the whole package of support relations as a whole (the more the better). So we might already agree that all support relations are just that, but that this whole package is less messy than that, or has fewer surprises (uncertainty), or whatever - depending on our rhetoric.

    So if we come unstuck at step 7, it's not that there's disagreement about whether step 7 is a supporting step, it's over whether step 7 fits with step 6, 5, 4, etc. Does it make a good story? I think of it like characters in a book, the author is saying "and then he thought this, and then he thought that, and then he thought..." and you (if you dispute it) are thinking "hang on, he thought this other thing two pages ago, this guy just isn't very realistic..."

    the support relation really shouldn't be presented as another belief itself, but as a rule or habit for passing from one idea to the other. (I think empiricists and pragmatists would agree on that.) So the issue at each step I have to spell out is not whether you accept a proposed connection, but your behavior -- do you pass from antecedent to consequent as I predict or desire? -- Srap Tasmaner

    I'm responding paragraph by paragraph (a bad habit of mine). I see you're pretty much saying what I've just said already - at least that's how I've interpreted it. I'd add that habits are heterogeneous, I don't think there's a single set, just an 'acceptable' set. Broadly, we're looking for predictability, adherence to one of the known sets, not going 'off script'.
  • Srap Tasmaner
    5k
    Not only easing the discomfort, but this is also the most profitable policy for reducing surprise. If actual agreement (top priority) doesn't reduce surprise, then we can at least fall back on predictable narratives about conflict. -- Isaac

    That's a piece I was missing.

    I'm selling the whole package of support relations as a whole -- Isaac

    Interesting. One thing I forgot about is kettle logic.
    (For people who don't happen to know this one.)
    (Freud's analogy for the 'logic' of dreams. Comes from a joke about a neighbor returning a borrowed kettle with a hole in it: he defends himself by saying, (a) it had a hole in it when I borrowed it, (b) it doesn't have a hole in it, (c) I never borrowed a kettle from you. --- Kettle logic is actually enshrined in our legal system; briefs will often present mutually inconsistent arguments for the same result and they don't care which one the court accepts.)



    There's still something a little off though.

    If I make some claim, I might expect you to agree. (Remember our "same as me" discussion, my weird insistence that this would be the cheapest and fastest way to model you?) But suppose you don't. I said that presenting some reasons is an attempt to bring your views into alignment with mine, but that feels both obviously true and a little weak. If I now know that you disbelieve P, I should be able to model you just fine, so that's not the whole story. (Keep reading, progress below.)

    When you disagree, there is also the surprise that I've been modeling you wrong, and it feels like one of our first responses is to get a quick sitrep on that failure -- to assess just how much damage this response does to my model of you, to figure out how wrong I was. This you can definitely see on the forum: people go from noting your disagreement right to "You mean you don't think you're conscious? You can't smell the sunrise and see the flowers??" That incredulity is a siren going off at the model-of-you desk. Oh, and we need to make sure the failure is confined to you, that I haven't been getting all kinds of stuff wrong.

    But once things settle down again, likely through the emergency deployment of narrative, why do I try to change your views? That could actually be the same as what was going on above -- an attempt to determine whether I've gotten more than you wrong. Are you in fact right? Do I need to update to ~P? So I request a report from the modeling team -- why is P in the model anyway? (It's fun writing as the clueless executive. I literally don't know why P is in the model! There are some nerds somewhere who take care of that stuff...) The modeling team -- working on a deadline -- throws something together and sends it up and I show that to you. "This is what the boys down in modeling say about P, and it sounds pretty good to me." That will look like an argument, and if I didn't have you around, but were only entertaining a doubt of my own, that might be that. But now my trust in the modeling department has weakened, so by showing you their report, I'm also checking up on them, testing them. "Look, you seem to know something about this P business. Here's what my boys are telling me. Is this any good? Did they miss the boat here?"

    Around here (TPF) it's almost a certainty that your answer will be "This report is crap. Your modeling team got this one wrong." But by saying this, you've now disagreed with more of my model, and even though my confidence may have been shaken, I don't just reset to impartial open-mindedness; I may have fallen from 95 to 93.8, that's all, so your responses are still being discounted by default as overwhelmingly likely to be wrong. By disagreeing with my Official Reasons, you're just pigeonholing yourself as an anomaly for me, making the case that my model only failed to recognize how perverse you are, while getting almost everything else right.

    Through these first few exchanges, there's been no sign of the need to bring your views into alignment with mine, only a brief flirtation with bringing mine into alignment with yours. --- Actually some of the initial incredulity-driven tests might amount to "Surely you misspoke," so there's that.

    There might be something else going on here though. When I recognize that you had a genuinely different view of what I assume is the same body of evidence, that piques the curiosity of the modeling team. "How did he come up with that?" There might be a bad algorithm there worth knowing about and avoiding, or there might be an interesting inference technique there we didn't know about, and even if it doesn't change our view in this case we're always on the lookout for new inference tech. So there's going to be a strong need to know why you had a thought that I didn't. Oh, and of course this plays directly into my need to model you better! My model of you was inaccurate; I need to update it with a model of the crappy inference algorithm you're using, in case I talk to you again.

    Still no sign of needing to change your mind though, even though it looks like that's what arguments are for. The only thing I can think of is some hand-wavy thing about cooperation in the general project of all of us staying alive. I might (will! do!) prefer not to have to maintain a desk just to keep track of your screwy views and it would be easier and cheaper to bring you back into line with "practically everyone". --- Or, at least, assign you to one of the narrative departments. I just don't have the manpower to track every rando's views individually.

    That's actually not bad, and less hand-wavy than I thought.
  • wonderer1
    2.2k
    That's actually not bad, and less hand-wavy than I thought. -- Srap Tasmaner

    It's very good. I love the way you are off and running with this.

    I do plan on responding to your earlier posts in a more in-depth way. However, I spent all day yesterday on the forum, and my 17 year old son and I are going on a two week tent camping road trip to Yellowstone next Saturday, and I have a lot of organizing and packing to do. (I don't suppose you live somewhere between Indiana and Wyoming? It would be fantastic to be able to talk to you in person. Although I do like forum style communication a lot, because it allows for input from a wide variety of perspectives.)

    Anyway, I'm going to have to limit my forum time for a while, and clearly you are well set up to do a lot of very productive thinking without further input from me.
  • Srap Tasmaner
    5k
    Yellowstone -- wonderer1

    I've been there! Just in a touristy way, not camping. It is beautiful.

    My favorite Yellowstone story. Driving along with the in-laws, and we see some cars pulled over, which is a sign there's something to see, so we pull over. There's a bunch of tourists standing around in a little picnic area and a couple of park rangers standing over by some trees talking to them, because behind the rangers out in the meadow is a grizzly bear. So this dude is standing with his back to a tree, and the meadow, answering questions about the bears and being educational. The other ranger is off to one side where he can see everyone and also glance over toward the bear. "Uh, Bill," and a nod toward the meadow, where it turns out the grizzly has covered some ground since the talker last looked. He turns to glance over his shoulder and visibly jumps! "Okay, everyone, you all need to move back now, that's it, move on back now, DON'T GO IN THE WOODS!" Just ever so slightly lost his cool as this grizzly ambled toward us, it was awesome.
  • wonderer1
    2.2k
    "Okay, everyone, you all need to move back now, that's it, move on back now, DON'T GO IN THE WOODS!" Just ever so slightly lost his cool as this grizzly ambled toward us, it was awesome.Srap Tasmaner

    That is awesome.

    I just got the bear spray I ordered last Friday. I'm hoping we get to see grizzlies in the wild. The Lamar Valley in Yellowstone has been called the Serengeti of North America. Spending at least one day there watching wildlife is part of the plan. Yellowstone (and Grand Teton NP) is so huge that I suspect two weeks isn't going to seem like enough time.
  • Srap Tasmaner
    5k


    Joke I heard while I was there about the little bells people wear especially on particular trails: What's the difference between grizzly scat and, say, black bear scat? Well, they look almost identical, except the grizzly scat has these little bells in it.
  • wonderer1
    2.2k
    Something else I want to bring up for consideration is that issues in communication are not so simple as being a matter of differing sets of intuitions. Another important factor is variation in the constellations of cognitive strengths and weaknesses people have.

    I have visuo-spatial strengths, and often I have the experience of thinking, "How can you not see that?", because it is difficult for me to imagine what it is like to lack the visuo-spatial abilities I take for granted. On the other hand I have weaknesses in processing speed, and would be a horrible umpire, with people yelling at me, "How can you not see that?"

    So an aspect of communicating skillfully, for me, is to develop some sense of where an individual's cognitive strengths and weaknesses lie. With at least some sense of an individual's cognitive strengths and weaknesses, I can try to capitalize on the strengths of the individual and work around the weaknesses to improve communication. Perhaps we all do this subconsciously to some extent, and pragmatically we don't tend to have much other option than to go with our intuitions on such matters. I just wanted to point out this complicating factor, because I'm a complicator and that's what I do. :wink:
  • Srap Tasmaner
    5k
    variation in the constellations of cognitive strengths and weaknesses people have -- wonderer1

    Perfect! For me. Just moments ago I realized I meant to say something about Whitman over in the thread where I'm pissing on the law of non-contradiction:

    (I am large, I contain multitudes.)

    And what you posted is almost exactly what I wanted to say. Ask me a question and I'll be responsible for the answer I give, as a person, as a moral agent by society's reckoning, but that doesn't mean it was "I" who answered. Some cognitive lieutenant piped up and said, "I've got this one, boss." We have many many specialty departments, and one of them produces the answer I (the person) give.

    This is perfectly clear in some linguistics research. You can identify a race between concurrent processes -- maybe one applying the "-ed" rule and one looking up the irregular preterite -- and whoever gets there first wins. Availability bias is obviously like this too, and suggests multiple sources of the answers we give, the things we say.
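
    A toy sketch of that race idea, purely for illustration -- the latencies, the tiny lexicon, and the function names below are all invented, and this isn't any particular model from the psycholinguistics literature, just the bare "whoever finishes first wins" shape:

        import random

        # A tiny invented lexicon of stored irregular past-tense forms.
        IRREGULARS = {"go": "went", "sing": "sang", "think": "thought"}

        def rule_route(verb):
            # The regular "-ed" rule: always available to fire.
            latency = random.gauss(220, 30)   # made-up milliseconds
            return latency, verb + "ed"

        def lexical_route(verb):
            # Lookup of a stored irregular form: only fires if an entry exists.
            if verb not in IRREGULARS:
                return float("inf"), None
            latency = random.gauss(180, 40)   # made-up milliseconds
            return latency, IRREGULARS[verb]

        def past_tense(verb):
            # The race: whichever route finishes first supplies the answer.
            latency, form = min(rule_route(verb), lexical_route(verb))
            return form

        print(past_tense("walk"))   # always "walked"
        print(past_tense("sing"))   # usually "sang", sometimes "singed"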

    I've complained about it, recently, with respect to my own posting habits, when I notice that I'm giving a type of answer out of habit, even if it's no longer representative of my thinking. We say stuff, and sometimes the stuff we say strikes even us as someone else talking with our voice.
  • wonderer1
    2.2k
    thread where I'm pissing on the law of non-contradiction: -- Srap Tasmaner

    Yeah, just did some pissing of my own. I need to throw my Kindle in the toilet now, and try to break my TPF addiction.
  • Srap Tasmaner
    5k


    Have a great trip! We'll be here when you get back.
  • Isaac
    10.3k
    There's still something a little off though. -- Srap Tasmaner

    I feared as much...

    now my trust in the modeling department has weakened, so by showing you their report, I'm also checking up on them, testing them. "Look, you seem to know something about this P business. Here's what my boys are telling me. Is this any good? Did they miss the boat here?" -- Srap Tasmaner

    Yes. So we can come at this from an information-first perspective and say that I'm using you (or vice versa) to update my beliefs about some external state, say a simple situation in which you've witnessed an event I didn't see - let's use last night's match as an example (what was Wenger thinking sending Walcott on that early? - no, let's not go there again). So I have virtually zero certainty about what happened, you tell me, and I update my beliefs. So far so simple, but if I have some certainty, I'm still going to use the same policy (maximise my model evidence), only this time I have a more complete model. The consequence of model evidence maximisation policies is that they tend to be confirmatory. Take perception as a simpler example, perhaps. If I think what I'm seeing is a table, I'm not going to scan the whole scene like a dot-matrix printer, I'm going to go straight to where the legs should be, confirm there's four, and retire ("yep, table - called it"). So transferring to communication (still in enquiry mode for now), I'm going to use you to update my model, but only under my model evidence maximisation policy for whatever I already slightly believe. That means I'm interrogating those bits of your belief that will confirm mine.

    So, my checking is directed, I'm not offering up all my model, nor am I interested in all of your model. I'm only interested in that specific bit of your model that might most efficiently confirm (or possibly rule out) my model.
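
    If it helps to see the shape of that in miniature, here's a deliberately crude sketch on a plain Bayesian reading -- the proposition, the reliability numbers, and the function name are invented for illustration, and a real model-evidence / active-inference account is much richer than a single Bayes update:

        def update(prior_p, report_says_p, reliability):
            """Posterior probability of P after hearing your report, via Bayes' rule.

            reliability is my estimate of you as a source:
            P(you report P | P) = P(you report not-P | not-P) = reliability.
            """
            if report_says_p:
                like_p, like_not_p = reliability, 1 - reliability
            else:
                like_p, like_not_p = 1 - reliability, reliability
            numerator = like_p * prior_p
            return numerator / (numerator + like_not_p * (1 - prior_p))

        # The match I didn't see: near-zero prior knowledge and a trusted source,
        # so your report moves me a lot.
        print(update(prior_p=0.5, report_says_p=True, reliability=0.9))    # 0.9

        # A belief I already hold strongly, contradicted by a source I barely
        # trust: I slide from 0.95 to roughly 0.94 -- the "95 to 93.8" picture.
        print(update(prior_p=0.95, report_says_p=False, reliability=0.55)) # ~0.94

    What a one-line update like this leaves out is exactly the directedness of the checking -- going straight to the bit of your model most likely to confirm mine.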

    So when you say...

    Through these first few exchanges, there's been no sign of the need to bring your views into alignment with mine, only a brief flirtation with bringing mine into alignment with yours. -- Srap Tasmaner

    ... I think this is right. Basically, my first pass is going to be so honed on model evidence maximisation that it's almost more a test of how useful a source you are, a check to see if you're going to have the confirmatory evidence I need. When you give something completely unexpected in response, you're no longer a useful source for model evidence maximisation. The policy has to change. So...

    There might be something else going on here though. When I recognize that you had a genuinely different view of what I assume is the same body of evidence, that piques the curiosity of the modeling team. "How did he come up with that?" There might be a bad algorithm there worth knowing about and avoiding, or there might be an interesting inference technique there we didn't know about, and even if it doesn't change our view in this case we're always on the lookout for new inference tech. So there's going to be a strong need to know why you had a thought that I didn't. Oh, and of course this plays directly into my need to model you better! My model of you was inaccurate; I need to update it with a model of the crappy inference algorithm you're using, in case I talk to you again. -- Srap Tasmaner

    ... now it's about you the external state, not you the source. Coming back to perception, I've checked where the legs should be and, disaster (for the model), not only do I find no legs, but no ~legs either. Nothing. It's dark. My saccade policy has failed. Now it's about the external state 'darkness'. What's going on, and what can I do to remedy this failure of my model optimisation?

    I think here is where most interaction sits in philosophy-type conversations (politics, social arrangements etc. too). The cost of using you as a source is so high, since our respective models are so misaligned, that my best surprise minimisation strategy might well be to fix you. I need you thinking like me so that I can use you as a future source. You're no use to me constantly surprising me with completely left-field models, because I don't doubt my own models that much, that I'm going to 86 them and insert yours wholesale.

    And I think this is where habits of thinking come in. The effect of this practice over time in communities is to hone in a basic set of thought habits that at least keep us all vaguely useful to each other as model evidence maximisation sources.

    Or in your own words...

    some hand-wavy thing about cooperation in the general project of all of us staying alive. -- Srap Tasmaner
  • wonderer1
    2.2k
    Have a great trip! We'll be here when you get back. -- Srap Tasmaner

    I'm not leaving until Saturday, and now I'm done with trip prep for the day. So perhaps I'll get somewhat caught up with responding to you before I go.
  • wonderer1
    2.2k
    Absolutely. The thing about psychological theories is that everyone has them; you have to have one, otherwise your strategies when interacting with others are random. We don't just throw darts blindfold when deciding how to respond, we have a theory about what our actions/speech is going to do, how it's going to work. That's a psychological theory. -- Isaac

    Exactly. Furthermore, it seems worth pointing out that everyone has one, but some are based on looking into the evidence and some aren't. (Not that I need to tell you that.)

    If psychology fails, it is its methodology that's at fault, not its objectives. -- Isaac

    "If psychology fails" to me, seems a question that would come out of an excessively dichotomous way of looking at psychology. I'd think a more relevant question is whether there is progress in psychology. From my perspective its pretty undeniable that progress is ongoing in psychology. We are fantastically complex creatures though, so of course progress takes time.
  • Isaac
    10.3k
    it seems worth pointing out that everyone has one, but some are based on looking into the evidence and some aren't. -- wonderer1

    Yes, that's sort of what I was trying to get at. No matter how bad our methodology is, it can't be abandoned, unlike, say, the study of black holes prior to the equipment needed to measure them. In the latter case we could say, "oh well, we'll never know" and go about our daily lives ignoring the issue. In the case of psychology, that's not an option: we have psychological theories, we act on them every day, our political policies are built on them. Even a modicum of slightly scientific analysis is better than none. A 1% replication rate is 1% more certainty than we had before.

    We are fantastically complex creatures though, so of course progress takes time. -- wonderer1

    Yes. And ever changing. If psychology is affected by culture (and I'm certain it is) then what was true yesterday in psychology might not be true today. We're playing catch up.
  • Count Timothy von Icarus
    2.8k
    Having gotten distracted by the minutiae of justification, I would just offer up that how someone sees the relevance of history in philosophy likely depends on their philosophy of history.

    Should we consider philosophical questions largely in isolation or should we be thinking in terms of a larger picture, e.g. "where is human thought coming from and where is it going?" Are there patterns in philosophical thought such that we can see where we might be headed from where we have been?

    Is the "Great Conversation," the canon of philosophy, simply a collection of influential works that happen to cite one another, or is it an example of an unfolding dialectical process? Do we continue to study the works of Plato, Kant, etc. because there is something truly great about the primary sources? Or do we keep going back to reshape their work for our times, in a way "reshaping," the history itself? If the latter, is there any discernible pattern to how this is done?

    Is "philosophy... [its] own time apprehended in thoughts?"

    For skeptics of the speculative attitude, I'll just throw this quote out there on speculative history more generally and add a bit more below:


    Speculative philosophy of history, then, stems from the impulse to make sense of history, to find meaning in it, or at least some intelligible pattern. And it should not surprise us that at the heart of this impulse is a desire to predict the future (and in many cases to shape it). By any standards, then, this branch of philosophy of history is audacious, and there is a sense in which the term ‘speculative’ is not only appropriate but also carries derogatory implications for those historians and others who insist on a solely empirical approach to the past, i.e., on ‘sticking to the facts’...

    To others, however, it is a worthwhile undertaking because it is so natural to a reflective being. Just as at times one gets the urge to ‘make sense’ of one’s own life, either out of simple curiosity about its ‘meaning’, or through suffering a particularly turbulent phase, or because weighty decisions about one’s future are looming, so some are drawn to reflect, not on themselves, but on the history of their species – mankind.


    Whether speculative philosophy of history is worthwhile or, instead, a fundamentally flawed exercise, it is surely an understandable venture. Firstly, attempts to discover a theory or ‘philosophy’ of history are intrinsically interesting because they try to make sense of the overall flow of history – even in some cases to give it meaning. And there is a sense in which to do particularly the latter is to offer answers to the question, ‘what is the point of life?’ (not yours or mine, but human life in general.)

    The importance of such a question is either self-explanatory or nil, depending on an individual’s assumptions. Some see it as the ultimate question to be answered, whereas others see it as symptomatic of an arrogant anthropomorphism which demands that ‘life, the universe, and all that’ be reduced to the petty model of merely human dimensions, where intention and reason are seen as the governing principles. But that individuals differ in this way is exactly the point, in the sense that speculative philosophy of history raises the issue directly into the light of argument, allowing us to examine our initial assumptions regarding the value or futility of such ‘ultimate’ questions.

    For example, one might ask sceptics whether they at least accept the notion that, on the whole, ‘history has delivered’ progress in the arts, sciences, economics, government, and quality of life. If the answer is "yes," how do they account for it? Is it chance (thus offering no guarantees for the future)? Or if there is a reason for it, what is this ‘reason’ which is ‘going on in history’?

    Similarly, if the sceptics answer ‘no’, then why not? Again, is the answer chance? Or is there some ‘mechanism’ underlying the course of history which prevents overall continuous progress? If so, what is it, and can it be defeated?


    From M.C. Lemon's Philosophy of History

    Additionally, if we believe science tells us true things about the world then presumably we believe that at least one human project does undergo progress. To be sure, we don't think it always gets things right, but we also tend to think that a biology textbook from 2020 should be much closer to the truth than one from 1960 and that one from 1960 gets more right than one from 1860.

    If we can make progress here, such that human beliefs hew closer to the truth over time, why not in other areas? Why not in some or all areas of philosophy?

    Humans are goal driven and can accomplish their goals. Indeed, a big trend now is to ground the emergence of meaning in an "essentially meaningless," physical reality in the goal oriented nature of life itself. Groups of humans also accomplish intergenerational goals, e.g., the great cathedrals rose over several generations. Whole nations have at times been successfully mobilized to accomplish some goals, e.g., the standardization of Italian and German out of several dialects in the 19th century, or the revival of Hebrew as a spoken language after 2,000+ years. This being the case, what stops us from recognizing a broader sort of global "progress," or a more narrow sort of progress in some areas of philosophy?

    If the reason progress is impossible is because "progress" can't be statically defined long term, is there any pattern to how we redefine "progress" over time?

    I don't want to derail the thread, if we want to have a thread on the philosophy of history we can, these questions are more rhetorical. Obviously the relevance of history changes depending on how you answer them. There is an argument to be made that focusing on arguments in isolation is akin to putting all your effort into finding out the best way to walk and making the most accurate maps, while completely ignoring the question of where you are walking from or to and why.

    Just as an example, the cooption of Peirce by the logical positivists is relevant re: questions on ontology writ large if we see logical positivism largely as a reaction against the influence of Hegel. The move wasn't entirely reactionary though, it doesn't go back to mechanism, but instead moved to an empiricism so radical that I honestly find it closer to idealism than today's popular mechanistic accounts of physicalism. In this, and many other ways, it is more a synthesis of Hegel with other contradictory strands in philosophy. Hegel was sublated and subsumed within the new "logical positivism," and this helps us see why logical positivism was born containing the seeds of its own destruction. A set of implicit contradictions was there from the beginning, just waiting for folks like Quine to make manifest.

    If ideas and theories don't simply "evolve" due to natural selection towards truth (i.e. the Fitness vs Truth Theorem), but rather advance through a dialectical model, then history is a good deal more "active" in how thought develops in all contexts. Saying these turns are "necessary" might be a bridge too far, but they also aren't as contingent as in a "natural selection-like" theory of how knowledge progresses. Something like an attractor in systems theory might be a better way to conceive of the dialectical advance, maybe blended with the idea of adjoint modalities and the way a proof of one object serves as a proof of others (a key intuition in the development process of category theory, from what I understand).

    Obviously the above example would need a lot more fleshing out than I want to put into it to be compelling and I certainly don't want to presuppose Wayfarer was thinking anything like this. Not all speculative history need be quite so Hegelian.
  • Srap Tasmaner
    5k
    There is an argument to be made that focusing on arguments in isolation is akin to putting all your effort into finding out the best way to walk and making the most accurate maps, while completely ignoring the question of where you are walking from or to and why. -- Count Timothy von Icarus

    That is exactly the sort of position I was hoping someone would advocate -- but for some reason even you hedge here and don't advocate it -- and why I didn't feel comfortable just branding @Wayfarer's lectures on history non-sequiturs.

    Saying these turns are "necessary" might be a bridge too far, but they also aren't wholly contingent as in the natural selection type theory of how knowledge progresses. -- Count Timothy von Icarus

    And this is just obviously right.

    Here's two points, one from the thread, and one kind of its background:

    (1) Lots of people say history has pedagogical value, that you can understand ideas better if you know their history, what they were responding to (as in the second quote), the whole context, even what came after in response.

    (2) Some people hear "the Enlightenment" and think, "Greatest wrong turn in history, still sorting out the mess it made," and some people think "Finally! That's when we got on the right path, the only trouble is staying on it."

    I think one of the issues @Isaac was raising is that (2) exerts a considerable influence over how you enact (1). Are you going to put the Enlightenment into a story in which it's the Good Guy, disrupting Bad Old Tradition (especially religion), or the Bad Guy, depersonalizing nature, atomizing everything, destroying the tried and true holistic understanding of things (and banishing God to fairy land)? @Isaac's suggestion is, I believe, that there is no 'objective' context to recover to understand the Enlightenment; however you describe that context, before and after, is going to be shaped and colored by the story you're telling about it.

    And that's likely just true, but may leave some room for comparing stories, judging them more or less comprehensive, more or less true to the (cherry-picked) facts, just the usual stuff. I mean, of course we do that. But the calculus changes here if you recognize that all you have the option of doing is comparing stories (and what they present as evidence for themselves) to each other; it's obvious with history, but true everywhere, that you don't have the option of judging a story by comparing it to what it's about, 'reality' or 'what really happened'. Comparing stories to each other might give some hope of 'triangulating' the truth, until you remember that this triangulating process is also going to be shaped and colored by narrative commitments, just like the material we're trying to judge.

    Thanks for bringing us back to the topic. More interesting points in there than I've responded to.
  • ssu
    8.6k
    I've often wished math and science were taught with more of an eye to history. -- Srap Tasmaner
    Great point! Unfortunately in school, and even at university, science and especially math isn't taught like this: not only how the mathematician/scientist came to the conclusion, but also how the scientific community accepted the result. There simply isn't the time. Hence you are taught the theory, the proof, the conclusions. And that's it, then onward. Not much, if anything, on how it was done, what the objections were, possible earlier errors, etc.

    Knowing the history, the older ideas, the now ancient technology and methods used makes it all far clearer. It's not just that you simply learn by heart to use the science/math like an algorithm to answer a certain question.
  • wonderer1
    2.2k
    Yes. And ever changing. If psychology is affected by culture (and I'm certain it is) then what was true yesterday in psychology might not be true today. We're playing catch up. -- Isaac

    Good point about the moving target. Furthermore, I'd think propagation of psychological understanding itself contributes to the target moving.
  • Count Timothy von Icarus
    2.8k

    -- but for some reason even you hedge here and don't advocate it --

    My reticence isn't so much due to the fact that I find speculative history to be wrong-headed or hopeless, but rather that it's almost impossible to advance such a theory in a convincing and well-supported manner while also keeping the argument short. It's also the type of thing where coming up with criticisms is much easier than coming up with convincing positive arguments, and I wouldn't want to get sidetracked defending the particular merits of some one theory when the question is more about the merits of speculative history and how it interacts with philosophy in general.

    (2) Some people hear "the Enlightenment" and think, "Greatest wrong turn in history, still sorting out the mess it made," and some people think "Finally! That's when we got on the right path, the only trouble is staying on it."

    The "we went off on the wrong track here," type arguments aren't necessarily without their merits, but these tend to be arguments about how there is some "truth" or "ideal" out there and how we can discover/actualize it, rather than being an attempt to describe progress as a whole.


    @Isaac's suggestion is, I believe, that there is no 'objective' context to recover to understand the Enlightenment; however you describe that context, before and after, is going to be shaped and colored by the story you're telling about it.

    This is certainly true to some degree. There is no one objective frame in the first place because different people have different opinions within their own eras, and oftentimes these are diametrically opposed. The same people also view the same events differently at different times. Nor are trends ever absolute; every period of "romanticism" has its rationalists, every period of "rationalism" has its romantics.

    That said, I don't think this leaves us unable to analyze intellectual history at all. We can observe that Renaissance thinkers "rediscovered" classical culture in an important way. We can spot major swings in US culture when comparing the 1950s and 1960s and be quite confident in describing real differences in trends. The problem is often one of degree: we can overemphasize some trends, etc. There are different reasons we have for turning to history, so how deep we go in exploring nuance is something that gets determined on a pragmatic basis.

    Moreover, the problem only seems so intractable if we insist on seeing history in terms of agents and intent, a stage where individuals are running the show. "How can you privilege this or that voice? How can you be sure this description of events isn't self-serving?" These are certainly valid questions, but questions about individuals' original intent are only paramount if we think man is firmly in the driver's seat of history, that there is one proper unit of analysis in human affairs: the individual. I think this is a major mistake.*

    There is plenty of work in the social sciences to suggest that institutions have goals that aren't continuous with those of the individuals that compose them, that organizations exhibit emergent forms of intelligence in problem solving, and that "group minds" are a useful way of understanding some emergent behaviors. From ant colonies, to lymphocytes, to neurons, we see patterns in how complex systems work, and these seem to apply to human social organizations. So, just as no one ant knows what the colony is doing, the same can be true for us vis-a-vis history.

    This is what Hegel gets most unambiguously correct. He's at the same time an early progenitor of complexity studies and still on the bleeding edge of being willing to follow it to its logical conclusions. What "a man" thinks doesn't drive history in the long run, but rather what "mankind" thinks. We are but accidents of a social "substance," i.e., "Spirit." His teleological claims about where Spirit is headed are less supportable.

    Seen from above, the various threads of philosophy over the years look akin to the "terraced deep scanning" performed by lymphocytes as they dynamically explore a massive sample space in an attempt to solve problems. Some areas get explored more thoroughly than others, some lines of inquiry receive more resources at one time, and multiple lines work in parallel.

    IMO, the problem of sorting out bias is not the central problem when considering history, although it is a real one. The larger problem is that we're in the role of a neuron having to explain what the entire brain is doing, or a fish being asked to explain the behavior of its school. This is why we have needed to build such a massive apparatus of data collection and analysis, and so many separate fields of inquiry in the social sciences. Our narratives are akin to neuronal action potentials or honey bee dances; they're the way individual components of the system talk to one another.

    However, this doesn't doom civilization's attempts at self-knowledge any more than a human being's mind being the work of small components precludes us from having a sort of emergent, imperfect self-knowledge. Sure, a sole neuron is never going to understand the brain alone, but then the neuron doesn't work alone either. History writ large is communicated, and its information processed, by systems of people, not individual people in isolation. I think the correct analogy is a perceptual system, a mind mulling something over, not a map that gets pieced together by individuals.

    But if this is true, then science and philosophy are also not a mapping process, but more akin to group cognition. In the big picture they are just another link in a great chain of systems whereby being encodes being, representing itself to its self. This chain continues in ever higher levels of emergence, from the most primitive genomes, to nervous systems, to language, to cultures, and upwards, with each system undergoing its own form of natural selection and evolution in a sort of fractal recurrence.

    To what end? We can consider that the earliest life didn't "understand" the universe so that it could survive, but rather survived because it somehow encoded relevant information about the environment into its structure. In life, "knowledge" pre-dates goals (as only makes sense: you have to know something to have goals). But goals aren't irrelevant to survival in intelligent life, they just take time to emerge. We as individuals have goals, organizations have goals, but my guess is that we have yet to reach a point where the highest order organizations we are a part of can have goals.

    And perhaps goals undergo their own sort of selection process?



    But the calculus changes here if you recognize that all you have the option of doing is comparing stories (and what they present as evidence for themselves) to each other; it's obvious with history, but true everywhere, that you don't have the option of judging a story by comparing it to what it's about, 'reality' or 'what really happened'. Comparing stories to each other might give some hope of 'triangulating' the truth, until you remember that this triangulating process is also going to be shaped and colored by narrative commitments, just like the material we're trying to judge.

    Exactly. Sort of like how the visual cortex doesn't work with any of the original light waves that are the "subject" of sight, and the components of the auditory cortex don't have access to, or communicate with, sound waves. Narratives are the action potentials of history.


    *(Interestingly it is also a mistake that human beings make almost universally vis-a-vis nature, both:

    A. Early in the development of civilizations - i.e., animism is ubiquitous, seeming to occur across cultures until a civilization develops some form of philosophy that starts to look for abstract principles that determine how nature works. E.g., "the river floods or doesn't flood because it wants to, the rock falls because it wants to," etc.

    B. Early in human development. Research shows that young children are far more likely to describe events (including those involving only inanimate objects) in terms of agency than adults.

    This is an interesting parallel. Do people in a more advanced society need to retread the mental routes their ancestors have taken to reach the same developmental stages? Or maybe it is a coincidental similarity?)
  • Srap Tasmaner
    5k
    That said, I don't think this leaves us unable to analyze intellectual history at all. We can observe that Renaissance thinkers "rediscovered" classical culture in an important way. We can spot major swings in US culture when comparing the 1950s and 1960s and be quite confident in describing real differences in trends. -- Count Timothy von Icarus

    Okay this is the perfect example.

    What will you say about the difference between the 50s and the 60s? Let's say this comes up in a discussion here on the forum, and broad strokes are acceptable. You want to describe the difference, how will you do that? What words will be in your description?

    There are to start with the two obviously opposing views, which I won't rehash in any detail. The 60s was either a time of liberation or of everything going to hell. But suppose you don't want to say either of those because you're doing philosophy, you're being scrupulous, you don't want to rely on an explicitly tendentious description of the 60s, so what will you say instead?

    You might just state some facts, by "facts" here meaning statements about the 60s you assume will be for the most part uncontroversial. More young people read Marx than in previous generations, and more claimed to have read Marx. Young people in considerable numbers publicly protested many government policies relating to the war in Vietnam. Many people protested racialized laws and police practices especially in segregated Southern states and large cities throughout the country. Blah blah blah. We're going to aim for neutrality here.

    My issue is not to nitpick over how neutral you can manage to be, but this: the more neutral you manage to be, the less likely it is that what you say has any direct connection to the larger argument you're making. That won't be true for all cases, obviously; if someone claims to prove that young people have never taken to the streets, that argument lands on a factual claim which can be refuted with a counterexample.

    But the cases I was interested in look more like this:

    A: The Industrial Revolution was a mistake.
    B: Why do you say that?
    A: The steam engine was invented in the 18th century and the first commercial use of Watt's improved design was in 1776 by an ironworks...

    Etc etc etc.

    By being scrupulously neutral in your description of history, you force the reader to 'connect the dots', to figure out what inferences you intend them to make. In a case like this it's obvious there's some connection, and depending on the rest of the paragraph many more connections might be implied or inferred, but none of that is actually stated. This is exactly the point I was addressing in the OP.

    So to make the point you're making clear, in many cases, you'll have to give up on this scrupulous neutrality and give in to being at least a little tendentious. In a lot of cases. Where you only need facts to support your argument, no. But if you need something taken as something for it to hook into your argument or your claim explicitly, for it to be anything more than obiter dicta, then you're down in the trenches offering an interpretation.
  • Count Timothy von Icarus
    2.8k


    Exactly. This is true with the sciences too. If we want to challenge the existing paradigm in any way we need to make more theoretical arguments. If we retreat to only looking at the well-verified, replicated empirical observations, we can only say certain types of (generally uninteresting) things.

    You can't explain empirical results coherently without some degree of theory-ladenness in the first place. If you avoid making any theory-laden claims then at best you implicitly cede the role of explanation to the dominant theories, at worst you just have an incoherent list of observations.

    But I also get why people simply make historical links between ideas without advancing a theory. I think the appeal here is due to how our cognition functions. When I read "Spinoza" or "Descartes," there is a whole rich web of interconnected concepts attached to those names. I don't have to unpack all those concepts to use them. Sometimes it's easier to do analysis on compressed data (e.g. it is easier to see that 10*10^900 > 10*10^899 in this format than by looking at the numbers written out in decimals). The names become a vehicle of tremendous data compression. Which is just to say that "x also thought y" may be able to do a lot of heavy lifting in tying together concepts, but only if people actually share the same reference points, which they often don't in philosophy because the field is too big.

    IMO this works because the "parallel terraced scan" does indeed explain how the mind works in key respects. This is also why sentences in specialized sub-disciplines sometimes appear to be saying completely trivial things as an excuse to drop in names or arcane terms (granted, this also happens to paper over a lack of substance or due to bad writing skills).

    I also think people like historical narratives of how science, math, etc. develop because we are innately geared towards remembering people, conflicts between people, social interactions, etc. as a social species. Hence the desire to "tag" abstract ideas to some individual or group.
  • Srap Tasmaner
    5k
    I also think people like historical narratives of how science, math, etc. develop because we are innately geared towards remembering people, conflicts between people, social interactions, etc. as a social species. -- Count Timothy von Icarus

    Yeah I'm not contesting any of that.

    Let's put it another way: suppose you're making some argument and you have in mind a particular interpretation of the 60s that would support your claim; but instead of presenting that version, you present a scrupulously neutral presentation of the 60s at the point where your tendentious interpretation would hook into the larger argument you're making. The reader either gets what you're (not) getting at or they don't.

    But what you've done is suppress your reason for referring to the 60s at all by moving to the scrupulously neutral version, and you've done this instead of just not reaching for the 60s in making your argument. You're trying to have your cake and eat it too, and violating Grice's maxims. It's not about whether the point you're making is persuasive or worth considering or 'legitimate' in some sense; it's the roundabout way of (not) making the point that is at issue.

    I do the same thing you did, where you suggested that 'an argument could be made ...' I use that one. I also use 'Some might argue ...' I think that's acceptable when it's really and truly not my position but a position I want to talk about or I want someone else to talk about. (I tried it several times in this thread.) I also use that when I'm not sure what to say about it except that it's a position that occurred to me is possible.

    But it might be a habit worth breaking, or at least it might be better just to directly say what those little phrases are standing in for. I think I'm happier when I just say things like "I think there are three options here..." and then lay them out. No confusion there about whether I'm advocating in a deniable way, etc.