Comments

  • Some questions about Naming and Necessity


    I'll add one little note, relevant to the issues raised in the OP about essential properties.

    In the collected papers of Ruth Barcan Marcus, there is a transcript of a discussion between Marcus, Quine, and Kripke, who was (iirc) at the time maybe not yet 20, and I forget who else. Anyway, I remember a specific exchange where Quine said that Kripke's approach would require bringing back the distinction between essential and accidental properties, and Kripke agreed, but didn't consider that the fatal flaw Quine did.

    I think there was some bad blood later, Marcus or people on her behalf claiming that the causal theory of names was stolen from her.

    Anyway it's interesting to see Quine's star student (and then later Lewis) already plunging into waters he was deeply apprehensive about.
  • Some questions about Naming and Necessity


    Right right. It's been years since I read this. I've got nothing to contribute on "what Kripke would say" so I'll mosey along.
  • Some questions about Naming and Necessity
    whether the statement-type designation -- "He is the person about whom I say . . ." -- is rigid.J

    It just seems obviously not to be.

    1. It has an indexical in it. I think that rules it out from the jump.

    2. As phrased, it names a class of actual performances, without even a ceteris paribus clause. The obvious way to strengthen it is to shift to talk of dispositions. But c.p. clauses and dispositions have known issues.

    What you seem to want is really an in-between category of "rigid-for-you".
  • Some questions about Naming and Necessity
    I think Srap Tasmaner is basically saying he doesn't think at all when he's not engaging another person. I think he's saying he's not even conscious of the world around him until he discusses it, at which point a sort of negotiated narrative comes into being.frank

    In this case, even to the degree that I am engaging with another person, I am speechless.
  • Some questions about Naming and Necessity


    I'm sure it's my fault. Of course it should be possible to provide an account of what makes names names, what makes them special, what their role in language is, what makes them different, and this is the sort of thing Kripke is up to. Sure.

    But we were also talking about reference as such, and it's clear to me that an account of names in terms of baptism, or words in terms of stipulation, can't also serve as an account of reference but presumes it. If you want to teach someone "blork" means that thing, you have to already be able to successfully refer to that thing. (I think Wittgenstein raises similar objections to theories of demonstrative teaching, as if pointing "just worked".)

    So talk about stipulation and teaching all you like, but it doesn't get you to that level of originary reference you're chasing, the intentionality you cannot be mistaken about. It relies on that; it doesn't explain it or even describe it.
  • Some questions about Naming and Necessity
    If I teach others that my shriek refers to Mr. Champagne, in what way could this reference fail for others, or be mistaken on my part?J

    This question is a non-starter. You're presuming the entire system of conceptualization and language usage is at your disposal, and then all you're doing is in effect introducing a word by stipulation. It is interesting that we can do this, but it doesn't get anywhere near addressing the questions you're interested in.

    We're actually covering similar territory to the memory discussion. My position is that rather than the pure phenomenal experience we overlay with narrative, which we can then strip away, all we've got is narrative. The process of "stripping away" you imagine is real, but creative: it makes a new thought out of thoughts that came to you not just enmeshed in context, but constituted by our systems of understanding and communicating. I don't think you really have the option to just set those aside and recover some original underlying experience ― you never had access to any such experience.

    The idea, as I conceive it, is similar to Sellars's argument in "Empiricism and the Philosophy of Mind": he allows that there must be some sort of raw inputs to our thinking processes, but denies that they have any cognitive status whatsoever. In particular, they cannot serve the Janus-faced role thrust upon them, linking on the one side to purely causal processes of sensation, and on the other side to our conceptual apparatus of knowledge and reason. Nothing can fill that role.

    As it is vain to seek the primordial unconceptualized experience, it is vain to seek the originary act of referring within a mind that knows no other minds.
  • Some questions about Naming and Necessity


    Well, I'm not even sure what we're talking about now, but it looks like you are trying to create one of Wittgenstein's private languages. You want to have in hand an association between an object and something, a name, a referring expression, or a bit of behavior, and for that association to be something you can't be wrong about.

    I think there are a couple layers to this. One is the apparent incorrigibility of attention: when I think of something, perceived or recalled or imagined, even if I am making important mistakes about the properties of that thing, even if I misidentify it, I cannot be wrong about it being the object of my thought (or intention). In my pitching example, the guy is remembering something someone did, even if it wasn't who he thinks it was or in the circumstances he thinks it was. There is, we want to say, a pure, original, and unimpeachable phenomenal experience underlying the stories we tell about it, even if those stories are all wrong. Even if it turns out the thing you're thinking about, that you think you remember, never happened, it's still what you are thinking about.

    It's a compelling vision, but I suspect it is fundamentally mistaken.

    When we come to language, the act of referring seems somehow to share in the unimpeachability of attention. The additional problem here is that "refer" is one of Ryle's "success words", so when we attempt to describe reference we describe successful reference. The downsides here are that (a) what is genuinely interesting, impressive, or mysterious is the element of "success" rather than something specific to referring; (b) our vocabulary blocks a proper comparison of successful and unsuccessful attempts at reference; (c) by being defined as successful, reference seems to take on the color of incorrigibility we associate (I think mistakenly) with attention.
  • Some questions about Naming and Necessity


    Referring is something done by fiat.frank
  • Some questions about Naming and Necessity


    I'm a little unconvinced by the "about whom I say..." locution, precisely because we're lacking a guarantee that the sentence the speaker utters means what he thinks it means (or "what he intends it to mean" or "what he means by it").

    I know the tendency of this analysis is to brush off mistakes, but suppose you point out to the speaker -- for an easy example, imagine the speaker isn't quite fluent in the language he's using -- that the words used mean the person is a prostitute: you might end up with a speaker insisting that they wouldn't say that! You'll probably want to cover by changing your description to something like "about whom I mistakenly said ..." but that's no help. What, so you *thought* the person was a prostitute and now realize they aren't?! Doubt the speaker will agree to that. Keep trying. (See "A Plea for Excuses".)

    And in the meantime, the speaker has still failed to refer, because once words are in the mix, you're stuck with them; either you trust them to faithfully carry your meaning, as your ambassadors, so to speak, or you allow that there must be negotiation between you and the audience.

    "You know what I mean?" isn't always a rhetorical question, even when intended to be.

    It's as if, what we need to say is that when you attempt to refer, you "hope" the words you utter will do the trick -- you could also hope you're using the "right" words but I think that's secondary. Now what is the audience to do with your hope? How does that help them know what you mean?

    ((There's a reason sitcoms are full of this sort of stuff.))
  • Some questions about Naming and Necessity
    Referring is something done by fiat.frank

    Tell us what you mean by that, and why you think so.
  • Some questions about Naming and Necessity
    maybe a better way to understand this is "The man over there who I think has a glass of champagne in his hand." That way, the description is not wrongJ

    Unless it is. This is such a great example because the reference of the word "champagne" is regularly disputed. Are you using the word "champagne" "correctly"? Are you sure? Is there definitely a correct way?
  • Some questions about Naming and Necessity
    Reference is set by the speaker.frank

    I don't think it's that simple.

    In cases where the speaker is mistaken, memory being what it is, it is possible for them to learn what they are trying to refer to.

    (Example:
    "When Maddux was pitching the last game of the World Series --"
    "Maddux didn't pitch the last game; Glavine did."
    "Okay then Glavine. No, wait, I know I was thinking of Maddux, so maybe it wasn't the last game I was thinking of ..."
    And this can go on. It might turn out the speaker was remembering yet another pitcher he had mixed up with Maddux. It might or might not have been a World Series game.)

    The other problem is that even if we say the reference is whatever the speaker intended, besides the problems already suggested above, intention not always being perfectly determined, we have the additional problem that words don't just mean whatever you want them to. The speaker has no choice but to engage in the grubby business of negotiating with the audience to achieve successful reference.

    Grice noted the complexity of our intentions when we speak to each other, even in the absence of confounding factors: not only do I intend you to understand that I mean X by saying Y, I also intend you to recognize that I so intend, and I also intend you to recognize that I intend you to recognize that I intend you to understand I mean X by saying Y, and on and on.

    So, no, I can't agree that it's just a matter of the speaker "setting" the reference, as if the audience were superfluous.
  • Some questions about Naming and Necessity


    I would only add that "the one holding a glass of champagne" is said for the audience's benefit. I can look at someone and silently think "asshole" and I know who I mean, no hoops jumped through. If I gesture, for you, at someone and say "asshole", you might need clarification about which of the people over there I disapprove of. Hence "the one holding ..." or even "the one holding - what is that? Is that champagne?"

    Point being it's not exactly a matter (de dicto) of what the speaker thinks, but really of what the speaker thinks the audience will think. "The guy holding what you would probably think is champagne, but I saw the bottles and ..."

    I can't remember if Kripke gets there, and I'm not looking at the book, sorry. But reference is a matter of triangulation, not just what pertains to the speaker or pertains to what she speaks of.
  • The Phenomenological Origins of Materialism
    the job of the human sciences is not to explain but to interpret and understand.J

    Which is fine, I've just been avoiding committing to some major difference between the natural sciences and the human or social sciences, because I've been trying to clarify ― or insist upon or defend or something ― that there is some genuine continuity, that the political scientist is as much a scientist as the physicist. I'd like that point to come out similar to saying that a biologist is just as much a scientist as a physicist, which most people will agree to without a moment's thought, but I think it's obvious there are ways in which biology had a much harder time making progress than physics. We got the theory of evolution before genetics. We had the number of human chromosomes wrong ― even once we had a number ― until 1956.

    To your point, part of my point earlier was not to assume that everything about physics is what makes physics a science; some of it may apply only to physics, or only to the natural sciences. So I'd be open to saying even the expected results differ, that we want explanations from the natural sciences but interpretations from the human sciences. That may be. Where I've been hoping to link them is in the process enacted to produce whatever kind of knowledge they produce, all that business about careful procedures and communal self-correction. It wouldn't bother me if there were sciences about different things that produced different sorts of results, so long as they were producing those results using a process that would be recognizably science to a scientist in any field. That's awfully idealized, I know, but think of what a sociologist could tell a chemist about the care with which he collected his data and the statistical analysis he performed on it; the chemist would recognize a brother scientist at work, even allowing for the great differences in their fields.
  • The Phenomenological Origins of Materialism
    I'd want to say that those tiny moments of musicality shouldn't be notated, even if they could be.J

    Agreed. I suppose I shouldn't have put it this way because I was thinking of the musicologist not the musician, someone who is analysing a performance rather than creating one.

    I'm of two minds about this talk of "having enough data" I keep using, here in talking about music or above in talking about the social sciences.

    There's a great forgotten book called The Road to Xanadu by John Livingston Lowes in which he traces every image, very nearly every phrase and every word, in two poems of Coleridge ("Kubla Khan" and "The Rime of the Ancient Mariner") to sources in Coleridge's library. It's not an "explanation" of the poems; I believe the point Lowes made (and I may misremember) was that in a way knowing all this only deepens the mystery of Coleridge's creativity in taking all this material to create these things. It's not like you could train an LLM on Coleridge's library and then say, "Write me an astonishing poem," and out they would pop.

    (Coleridge being a particularly ripe case, as Eliot described him, a man visited by the muse for a while, and when she left, he was a haunted man. Coleridge himself didn't understand what had happened.)

    So, part of me does want to say that there can never be enough data to explain, much less predict, human action, and certainly not unlikely human action like creativity. The "human sciences" would then be marked either by arrogance or folly, as you like. I could be old school, I'm old enough.

    But I'm not convinced. That attitude strikes me unavoidably as a rearguard action, defending human nobility against a godless and disenchanting science, that sort of thing.

    Instead, I think it's simply a fact that the data needed, and the theory needed, are evidently beyond us, and so we must make do and aim a bit lower in our expectations, or at least be more circumspect in our claims. When it comes to scientifically informed debates over social policy, for instance, we sometimes know enough to do better, but still less than we think we do and so some caution is advisable.

    We know a lot of what was swirling around in Coleridge's head, but not all of it, and we know something about how his brain worked, because it worked like ours, but the specific historical process that took those inputs and yielded those outputs is unrecoverable.

    So it is with any musical performance. I'm inclined to say that one of the reasons the musician played this note this way is because of that time she wiped out on her bike when she was 8. That might be a big enough factor to make it into her biography ― if, say, she broke a finger that healed in a way relevant to her playing. It might be a kind of emotional turning point for her, if it nudged her attitude toward risk a certain way. It might be an infinitesimal factor, no more or less relevant than the peanut-butter sandwich she ate that day, but all of which went into making her the person who produced that performance.

    We're talking really about what God knows about her. When God hears that performance, does he smile slightly and connect it to that skid on her bike? God has all the data, so how does he understand the world and the people in it?

    It looks like that's the standard for science I have been indirectly endorsing, or if not "standard" then "ideal". Which is a little odd, certainly, but maybe that's fine. In practice, science is entirely a matter of making do, and being very clever about what you can learn and how despite not being gods. I guess.

    Therefore Bayes.
  • The Phenomenological Origins of Materialism


    Yeah I think there's a trick to that story, that it does mean it's too hard to sight-read.

    But then I also think about the difficulty of notating jazz correctly. And I think about Jimi Hendrix, who seems to add some tiny bend or flutter to almost every damn note -- how do you notate all those micro-decisions? And so it is with any great musician, there are all those millisecond decisions that go into the performance, all those tiny variations that distinguish a good performance from a great one.

    Now, should we say there is no hope of a scientific approach to great musicianship? I actually don't think so. I think the point is that vastly more data is needed than you might at first think, certainly more than you would think if you looked even at a complex score, which is a great simplification of what a musician actually does.

    Any of that make sense to you?
  • The Phenomenological Origins of Materialism
    So to be clear, are you saying that science has to do with knowing-that, and non-scientific strategies for learning have to do with knowing-how? Even though there is some minor overlap?Leontiskos

    I'd say people quite often want to learn things that can be known, and when they reflect on how they're going about doing that, you have the beginnings of science. Recognizing that the first method that occurs to you, the natural or intuitive approach, might fail or produce unreliable results, and that taking some care up front, not just jumping in to slurp up facts as if they were just lying around, easily accessible to the laziest procedure, but planning an approach to learn what you want to know, that I would think of as the scientific impulse.

    That can happen anywhere anytime.

    For example, Ornette Coleman once said (I think this was in the liner notes to one of his early albums) that it was when he found he could make mistakes that he knew he was onto something. We're talking here about how to play, and how to write, but it is also possible to have knowledge about what you play and what you write. Even if we, rightly, resist the philosopher's instinct to reduce knowing how to knowing that, we ought also to resist excluding knowing that from knowing how.

    Further example, John Coltrane was a student of music theory. There are stories of him and Eric Dolphy with books spread out all over the living-room floor around them, discussing and analysing modes and scales for hours. Intense interest in knowing that. There's also a story that a young music student came to visit Trane once to interview him, and brought along a transcription she had made of one of his solos. She asked him to play it, and after trying a couple times, he handed it back to her and said, "It's too hard." Knowing how is still its own thing, howsoever informed by knowing that.

    I guess all I'm saying is that "know" is a verb, so we're always talking about a how, whether it's knowing that or knowing how. Those are different things people do, but I think we know they are, and have to be, braided together continually. In science, the intent is to get the hows right so that you can produce thats reliably; in jazz, the intent is to take the thats you can get your hands on to improve your ability to how.
  • The Phenomenological Origins of Materialism
    2. All human errors stem from impatience, a premature breaking off of a methodical approach, an ostensible pinning down of an ostensible object. — Kafka, the Zurau aphorisms
  • The Phenomenological Origins of Materialism
    I wonder if there are really no true ontological positions, only methodological ones. It's not what is real, it's where and how do we look.T Clark

    I meant to say earlier, I quite like this idea.
  • The Phenomenological Origins of Materialism
    I think we have to actually grapple with the now-common belief that that the natural sciences are more scientific than the social sciences.Leontiskos

    To the hoi polloi, "science" seems mostly to mean "medicine", which is no doubt an interesting story. For my purposes, medicine is a good example because the human body is complicated and difficult to study, and so progress in learning how it works has been noticeably dependent on developing new technologies. And here we're still talking about natural science.

    When you turn to the social sciences, there are additional impediments to a scientific approach. The sciences of the past (history and archaeology) face unavoidable limitations on what can be observed. If instead you're studying the present, there can be difficulties with observation ― political science has to rely on polling, which presents enormous challenges, on voting data, which can be difficult to link with other datasets, and on other sources like economic surveys. No one in the social sciences ever has nearly as much data as they would like, and what they would like is informed by theorizing that is perforce based on the limited data they can get. It's hard. You can design some pretty clever experiments in fields like psychology and linguistics, but economics and sociology are generally forced to make do with "natural experiments" (and in this they are more like astronomy and cosmology).

    In short, I tend to think social scientists are doing the best they can, and if we are right to have less confidence in their results than in the results of physics or chemistry, it's not because their work is less scientific, but a basic issue, first, of statistical power (lack of data), and, second, of the enormous complexity of the phenomena they study.

    Consider the fact that a very common objection to science-pluralism is that it would be unable to distinguish true science from pseudoscience (and the proponents of science-pluralism really do struggle with this objection). A pseudoscience is basically just a "science" which produces uncertain and unreliable "knowledge."Leontiskos

    I think honestly the similarities are only skin deep, and the processes of knowledge production in the two approaches differ dramatically.

    The pluralism I'm inclined to defend is twofold: one is Goodman's point about the sciences that are not physics getting full faith and credit; the other is the communal self-correction idea. The latter rests upon the simple fact that others are sometimes better positioned to see the flaws in your work than you are. That presents an opportunity: you can systematize and institutionalize scrutiny of your work by others. Two heads are better than one; two hundred or two thousand heads are better than two. There are some practical issues with this, well-known shortcomings in the existing peer-review process, for instance, but the idea is deeply embedded in the practice of science as I understand it, and I think it has proven its worth.

    Do you think there are non-scientific strategies for learning?Leontiskos

    Surely. Given the distinction between knowing that and knowing how, it stands to reason there's a difference between learning that and learning how. Acquiring a skill is a kind of learning that might here and there overlap with a scientific approach ― experimenting is what I'm thinking of ― but we would expect plenty of differences too, and the intended "result" is quite different.

    I think I'm okay with restricting science to a strategy for learning what can be known, and I also want to say it is something like the distillation of everything we have learned about how to learn what can be known. Science itself is a how, not a what. And that also means that we can learn more about how to learn things, so there's no reason to think the methodology of science is fixed.

    We're kind of going in every direction at this point, and I didn't even try to get to the "essence of science".
  • The Phenomenological Origins of Materialism
    science is not one method, nor is it a fundamentally different way of thinking from other forms of disciplined inquiry.Tom Storm

    I tend to think what matters most is that the enterprise is self-correcting, and it achieves that by being plural. The replication crisis is a great example of the scientific community's capacity to discover and address its own shortcomings.

    you seem to be saying that the natural sciences check more of our "science" boxes than the social sciencesLeontiskos

    I was trying not to say that, in fact, because any such list, with the intent of creating a scale of "scientificity", would be tendentious. Maybe it's silly, but it seems to me in some ways physics is easier than biology, which is easier than sociology. There are all sorts of issues of complexity and scale and accessibility (comparative ability to observe and measure). The story of physics itself moves from easy-to-make observations and measurements and relatively simple theories to very-hard-to-make observations and theories that are so complex their interpretation is open to debate.

    Roughly, I'm trying to say that I think it's a mistake to identify science with the methods that worked for the low-hanging fruit.

    the reason we approach different things differently is because they are different things. The reason we approach physics differently than mathematics is because of the difference between physics and mathematics.Leontiskos

    That's quite interesting. Mathematics is particularly troublesome, but I want to defend the view that there are approaches to the study of atoms and mountains and lungs and whale pods and nation states that are all recognizably scientific and scientific because of some genuine commonality, despite the differences which are unavoidable given the differences among these phenomena. That commonality might be more "family resemblance" than "necessary and sufficient conditions," but I lean strongly toward the mechanism of communal self-correction being required. I guess we could talk a lot more about all this.

    I'm going to hold off talking about pedagogy, but I'm glad you brought it up, because I think "learning" (as a concept at least) should be far more central to philosophy. This is my 30,000-foot view of science, and why I mentioned the importance of specifiable plans for further investigation above: science is a strategy for learning. That's the core of it, in my view, and everything else serves that, and anything that contributes to or refines or improves the process is welcome.
  • The Phenomenological Origins of Materialism
    us working hard to make senseTom Storm

    That's a lovely point.
  • The Phenomenological Origins of Materialism
    I'm just asking if you think some disciplines are more paradigmatically scientific than other disciplinesLeontiskos

    What if we left out "paradigmatically" in your question: are some disciplines "more scientific" than others? If you take "discipline" reasonably broadly, the obvious answer is "yes": writing poetry, for instance, is a discipline that, for the most part, does not even aspire to be scientific. Are you asking if some sciences are "more scientific" than others? Is physics more scientific than biology? Is biology more scientific than sociology?

    I'm having trouble imagining a reason to ask. It's clearly possible to make up an answer, to make a long list of characteristics of "science" and then count how many boxes each discipline checks. I think most of the natural sciences check whatever boxes you might come up with, and it wouldn't be surprising if the social sciences checked fewer, but it doesn't seem like a helpful exercise. It suggests that there is a difference due to the domain, when it's the approach that matters.

    Will one discipline provide a better starting point than another discipline, or not?Leontiskos

    I think not in principle ― not on account of something "especially scientific" about any given field ― but for pedagogical reasons, probably so. What would the students already have some familiarity with? What would most engage their attention? What would give them opportunities to participate and see for themselves ― to, in a fundamental sense, do science themselves?

    Maybe this is a variation on your question: isn't it the case that some domains are simply less suited to scientific study than others? Suppose you wanted to teach science and chose to begin with "the science of beauty", for instance ― how far would you get? I expect most of us would agree, not very far, but I don't think we have to dismiss the idea out of hand: why not explore and see if the process itself reveals the limits of what we can do here? ― Maybe this is the right point to mention that Goodman, in particular, insists that literature and the arts are not competing with the sciences and are not failing to meet a standard that is set by the natural sciences, but offer alternative frameworks for knowledge. (The word "knowledge" looks slightly odd there, but he would probably be fine with it.)

    I don't know ― is any of this in the ballpark of what you were looking for?
  • The Phenomenological Origins of Materialism
    Do you think it is appropriate to treat certain disciplines as paradigmatic sciences, such as physics or geometry?Leontiskos

    I don't really understand the question. "Appropriate" in what sense?

    Along the same lines, would the pedagogue be equally justified in starting with any discipline they like, if they wish to teach their pupil about scientific reasoning?Leontiskos

    I don't understand this question either. "Justified" in what sense?

    Truly don't know what you're getting at here.
  • The Phenomenological Origins of Materialism
    multiple realities, each intelligible through particular conceptual frameworks or perspectivesTom Storm

    It's the view Nelson Goodman defends in Ways of Worldmaking, and one consequence I found particularly appealing is that it puts you in a position to take seriously sciences which are not physics. Goodman argues that "reduction" is basically a myth, with no known exemplars. (It is true that physics constrains chemistry, which constrains biology, which constrains ethology, which constrains anthropology, but no one really thinks ― and there's no reason to think ― you could "explain" traditional religious practices in West Africa in terms of physics.) There is, on the contrary, no real reason for treating other sciences as "second class citizens" that might someday qualify as the real deal if you can show how they are consequences of physics.

    The alternative is to believe that there is only ever one thing to say, and anyone not saying that is wrong. But rather than see divergence as disagreement, it's possible in many cases to realize that it's only another perspective being offered. "But look at it this way ..." doesn't have to imply disagreement. Knowledge production is a communal enterprise.
  • The Phenomenological Origins of Materialism
    The former reflects a pragmatic stance, informed by an awareness of the limits of what can be knownTom Storm

    Agreed, but I would have thought "the limits of what we know how to investigate". At least that's how I think of naturalism; it's a program for further investigation that can actually be carried out. It may not get you everything that could be known -- how could anyone know that? -- but at least it's a definable plan for encroaching on the unknown.
  • How do we recognize a memory?
    Here I worry that bringing in "your mind" is one entity too many. Is this the picture?: An image occurs, my mind says it is a memory, and then some other item called "I" identifies it as a memory? Or when you say, "My mind said it was," does this just mean that I said it was?

    This kind of question does help us see how hard it is to work with a term like "mind". Do I want to identify "mind" with some psychological account of how images et al. get generated? Or would it be better to make "mind" equivalent to the "I", the self? Or is it this third activity that can mediate between the first two conceptions?
    J

    The intent of putting it this way was just to suggest that you might not ever be aware of entirely decontextualized (let alone "raw") bits of content. There's always some story to go along with it, however vague or incomplete or even inapposite that story might be. I really could have said "brain" where I said "mind", but I liked the sound of pitching it more at the level of function than mechanism.

    ― I will add that I have no idea how to talk about most of this coherently because I don't know what the purpose, even what the use of consciousness is, why we become aware of some of what the brain is getting up to.

    Here's a tiny example that just occurred to me in the last day or so, a phenomenon I was familiar with that I hadn't ever bothered connecting to my desultory reading about psychology. You're doing something which goes awry, say, closing a door awkwardly and it looks like you're about to pinch your fingers in it, and you just barely miss getting hurt but you say "Ow!" anyway. I've seen people do this in front of me, and everyone I've talked to about it has had this experience, the needless "ouch!"

    It's perfectly clear why this happens, psychologically speaking. Your brain is busy predicting future states of your body and preparing to respond to them, and forming and emitting words takes a little time so it doesn't wait until they're needed but prepares them a little ahead of time based on predicted or expected need. (Every human conversation shows signs of this.) When the moment of truth arrives, the needed "ouch!" is already on its way to being ejaculated, even if it turns out not to be needed.

    That means this "ouch!" is not quite the same as the automatic and involuntary scream of surprise pain. So what's "ouch!" for? I don't know, but my suspicion is that it is vaguely narrative supporting, either for your own consumption or others present, if there are any. "And then he pinched his finger in the door, and it hurt." It's a little label on the experience that drags along a little context, probably adds some little tabs that allow it to be in turn slotted into other, larger, probably narrative, contexts. It tells you what that moment means or could mean by telling you what it is or could be. Something like that.

    My suspicion was that these glimpsed images that flash through your mind arrive similarly with a suggested meaning or context and prepared a little to be taken up by other uses and contexts. So indeed tagged as a memory, but maybe weaker than that, offered as possibly a memory, and then we'll see if that holds when you (that is, your brain) do whatever you do with it. If it just goes on by, its status is left somewhat indeterminate, but if you do indeed treat it as a memory, next time it comes up it'll be more strongly suggested that this is a memory. (We know for a fact that this happens; Paul McCartney reports that he, like everyone else, had come to believe over the years that he broke up the Beatles, but that watching Peter Jackson's documentary brought back to him what it was really like, and everything that was going on then, and that it wasn't entirely his fault.)

    The experience of seeing image X and recognizing image X as, say, a memory, is simultaneous, and thus makes the experience different from recognizing image Y as a fancy. I'm not adding anything to some unlabeled or unrecognized image; it's all of a piece.J

    Right, I'm saying I doubt anything arrives unlabeled, whether that label is large and clear or small and hard to read, but that's not because the world itself is labeled but because your brain has a labeling process and you don't see anything until it's been through that process. You get them in consciousness at the same time, but I think they are still distinguishable because you can question their accuracy or usefulness separately.

    We can also go backwards now and note that to lay down a durable memory it has to make sense. People have trouble remembering random bits of stuff, but stuff in sensible patterns they can. That suggests that there might always be some minimal gesture toward making sense of what's in your mind, in case you want to remember it, if it turns out to be important in some way, for instance. (An interesting variation on this is the Columbo method, in which you pay particular attention to details that seem out of place or inexplicable, as if missing part of the context in which they would make sense.)

    This is all just psychology, and, what's worse, psychology I'm mostly making up.
  • How do we recognize a memory?
    It's a question about my relation to, my experience of, how the mind works.J

    That's close to where I'd find room for philosophy, but it's tricky.

    Consider emotions. The average person is under the impression that an emotion wells up from within them more-or-less fully formed, and that it's a definite thing. What's interesting is that people in the post-Freud world also accept that they might misunderstand or misread or misinterpret their own emotions ― hence the sitcom joke of angrily shouting "I'm not angry!" But the assumption here is that there is a fact of the matter, in the sense that your emotion is something definite itself.

    Thing is, it probably isn't. We have whatever feelings we have for whatever reasons (that is, causes) and then "we" ― our minds ― construct for us a story in which we are angry or happy or whatever. The inputs for those stories are manifold, notably including social as well as internal elements, but there's no pure internal emotional state to be represented. Emotions are thoughts and constructed like all thoughts.

    Roughly speaking, my expectation is that what we're talking about is similar: along with the content of your awareness there's a little story, often quite vaguely sketched, about this being a memory or a fancy.

    So I think in a way there is an answer to "Why do I think this particular thought I was just having is a memory?" and the answer is because your mind said it was, or some perhaps much more subtle and noncommittal equivalent ― maybe your mind tested the waters a bit in suggesting this is a memory to see if you'd bite, if that characterization of the thought got any traction and we should carry on with that, or if not we should start hedging a bit, maybe eventually admit it wasn't memory at all.

    I think the story is probably very similar to emotion, because ordinary people have unearned certainty about both. We all know that memory is pretty much always confabulation, but most people are still convinced that when they remember something their memory is trustworthy; in the same way they are quite certain that their emotions are from deep inside, from their very essence as individuals, and not, for example, shaped to fit the social situation.

    So ― coming at last to it, I think ― when you talk about our relationship to our thoughts, I'm afraid a lot of that is already stuff the mind is getting up to. Always busily rewriting the story.

    You could, of course, give up talking about our experience of our thoughts and instead spend your time on our concepts of memory and imagination, but I think there's a middle way.

    It does, after all, often matter to us a great deal whether we really remember something. That's pretty interesting, that we should care so much about a distinction that isn't all that trustworthy. When we insist that we remember something, we are fundamentally making that up ― a thought just isn't definitely a memory or not, even if your mind strongly encourages you to think that it is. So why do we do that?

    So maybe your reason for posting was somewhere near here. We feel one way about a thought if we think of it as a memory, and another if we think of it as fancy. Even though those two toys came out of the same bin.

    So yes I would be up for examining what "memory" means to us, why it's so important to us to determine whether a thought is a memory (yours or mine), the role all these reflections and commitments play in our mental lives. But I doubt there's anything worth chasing that would turn out to be the "genuine experience" of memory rather than imagination, because I doubt there's any such thing. Still, we behave as if there is, and that feeds back into our mental lives quite powerfully.
  • How do we recognize a memory?
    When it happens, are you instantaneously aware, as best you can tell, that the thought/image is a purported memory? And if so, how?J

    The first question is, fundamentally, empirical ― not just about me, but in general: is this an experience people have? The second question is still empirical, because it falls squarely within the domain of (cognitive) psychology.

    ― ― If you want my take on the psychology, it's worth as much as you're paying for it: I would expect that thoughts are "categorized" on the fly, as needed, and only as much as needed, and that overwhelmingly this process of categorization is not something you do consciously. "At bottom" there's whatever makes it into your awareness, and that's just some bit of content, probably itself underspecified, and then there's what it gets taken as ― memory, fancy, perception, whatever. The content present might not get characterized to any particularly sharp degree, if it doesn't matter for the rest of what your mind is up to; if it matters, there might be some effort put into it. In short, I'd expect that the difference between memory and imagination is "constructed"; I'd say the same for perception, and I think there's reason to, but I suspect it's a slightly different process since there's enormous specialization for perception in the brain, which might make a difference. It is nevertheless true that people believe they see things that they are in fact imagining, and vice versa, so clearly the same applies here: the difference is negotiable, how something is categorized is not "what it is".

    And that's what I am gesturing at when I say that we don't consciously decide whether the content in our awareness is remembered or imagined; in some sense, yes, there's a decision being made about what it is, very much so, but I think that "decision" is mostly made without your conscious involvement. Obviously there will be exceptions.

    That's all just blather, though, my guesses based on my reading and that's all. ― ―

    Roughly speaking, I think none of this is any of philosophy's business. In the 18th century, before we could do the sort of research we can do now, it may have been acceptable to speculate about how the mind works and how we distinguish perceptions from memories and so on, but it's rather foolish in the 21st century.

    There are still some things for philosophy to talk about, I think, just not this, at least not in this way.
  • How do we recognize a memory?
    But the "Why?" of "Why do I identify an image as a (purported) memory?" is different -- unless we are thoroughgoing physicalists. We believe, generally, that an explanation here is going to involve some reference to reasons, to conscious activity.J

    This is the main thing I find so puzzling about your approach. (You seem to think it's phenomenology, and I think it's rather the opposite.)

    Remembering is much like breathing; we do it on purpose, some of the time, and automatically, almost all the time, and we never stop.

    That's "remembering", not "becoming aware of a thought and labeling it a memory". If that happens at all, it's probably rare, unusual at least. A thought, if it's a memory, comes to us as a memory, period.

    (And I think it must. Consider the alternative: what reasons could you muster to judge a thought to be a memory? What could you possibly rely upon as you worked out the inference that this indeed is a memory? It is the fundamental form of knowledge; you are already relying on memory when it occurs to you to do a bit of conscious reasoning. You've no hope of hauling memory before the tribunal of reason.

    Of course an individual memory is open to criticism, as being inaccurate or incomplete, whatever. But not only is there an obvious difficulty in establishing that a given thought is a memory, any steps you take will be entirely reliant on memory, so memory as such simply must escape judgment.)

    Now, if you want to ask, what's that like, for something to be present to the mind as a memory? Fine, and that's headed back toward phenomenology. (What we do, rather than how, as my deleted post had it.)

    But what we can't do is go looking for criteria that we consciously use to identify memories or distinguish them from other thoughts. There had better not be such criteria, because we couldn't know them and could never apply them without already allowing memory to have its way.
  • How do we recognize a memory?
    here I am, here comes the mental item, here's me identifying it (seemingly instantly) as a purported memory.J

    Convince me that's either (a) not already a theory about how mental life works, or (b) a good theory, a reasonable theory.

    What has happened, or what have I caused to happen, to me? (Not so much "What has happened to cause this mental event?")J

    Yes, yes, but you seem to have the idea that the "mental item" might have causes, and those fall within the purview of psychology, but your identifying the mental item as a memory (or a fancy or a perception) does not, is not itself another sort of mental item, and does not fall within the purview of psychology. I can't imagine why you would think that. Surely identifying a thought as a memory is as much a psychological event as the thought so identified.
  • How do we recognize a memory?


    Oh I don't think so. There have been several interesting points raised (by you, @Dawnstorm, et al).

    No, my post was really just about methodology, because here's the thing: as posed, the question is about psychology. @J wanted to get away from that, but then you should really be asking different questions.

    And that would be worth doing, because memory is a very deep thing, it is the substrate of our mental lives, the medium within which thoughts grow, the object and enabler of perception and imagination, ... There's obviously a lot for psychology to say about all that.

    But what I find particularly interesting is the way memory can suddenly rise up and take control of the whole show, shouldering aside perception and imagination, all the mental work of keeping you alive. Memory refuses to stay in its place as the dependable foot soldier of thought. When the conditions are right (temperature and so on), a certain sort of breeze can throw me right back to my childhood.

    It is noteworthy how unbeholden to time our mental lives can be.
  • What is faith
    My guess is that ... — Evolutionary Naturalism and the Fear of Religion, Thomas

    And you ask me for evidence!

    A lot of empty chin-stroking. How you can take this seriously --
  • What is faith


    The evidence for what? For your assertion not applying to me?
  • What is faith
    The deeper dynamic of that is that secular philosophy is antagostic to the possibility of the transcendent because it is fearful that it might be real after all (compare Thomas Nagel's 'fear of religion'). Better to leave the whole question sealed.Wayfarer

    I expect I'll do as a representative secularist, and I have never in my entire life been afraid that one or another religion might turn out to be true.

    You (and Nagel, I guess) are just making this up.
  • What is faith
    My point is that the ought-claims of complete strangers have force for usLeontiskos

    What do you mean "us", kemosabe?
  • What is faith
    You might say, "I and everyone else on Earth share the value of wanting to avoid poisonous water, but that value is still arbitrary. Everyone on Earth may share the value, but that does not make it non-arbitrary."

    I don't see a need to enter into the debate on universal vs. objective. My point is that at least some values are shared by all humans, and this is all that is required for morality to exist. If this were not true then the complete stranger's warning would have no force for you. But it does have force for you, and therefore it is true that there are fundamentally shared values.
    Leontiskos

    I'm sympathetic, but this is patently false.

    If I want to die, I might very well seek out poisonous beverages.

    I have elsewhere argued at some length about the reasonableness of assuming that anyone you come across desires to continue living, absent evidence to the contrary. But it's still an assumption. I wouldn't call it an "arbitrary" assumption, any more than I would call the desire to continue living "arbitrary". But neither would I call either of them universal, because they plainly are not.
  • More Sophisticated, Philosophical Accounts of God
    Congratulations. You are the first person to use "cromulent" on The Philosophy Forum.BC

    This is demonstrably false. (I suspect you were misled by the mobile version having search in two different menus.)

    Either you had never run across the word here before or had forgotten that you had; then ― perhaps ― you checked your memory using a faulty procedure.

    Either your experience of the forum was idiosyncratic, or you misunderstood and mischaracterized that experience yourself, and then ― whichever was the case ― you projected your understanding of your experience onto the forum as such, and everyone's experience of it.

    But I'm sure that's completely irrelevant to the thread topic ...
  • How do you know the Earth is round?
    I watched that second video and cannot see anything like that.flannel jesus

    It does work better as a companion to the first video.

    The basic idea is that he has his camera mounted on a jib so that he can raise and lower it. At maximum height, you can see the far shore of the lake, which is like 7 km away, I think; when you lower it closer to the water, the far shore disappears. It disappears because it is now below the horizon. That's the math he explains in the second video.
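    The math can be sketched quickly: on a sphere of radius R, a camera h metres above the water sees the horizon at roughly d ≈ √(2Rh). A minimal sketch (the 3 m and 0.5 m camera heights here are my own guesses for the jib's range, not numbers from the video):

    ```python
    import math

    R = 6_371_000  # mean Earth radius, metres

    def horizon_distance(h):
        """Approximate distance (metres) to the horizon for a camera h metres above the water."""
        return math.sqrt(2 * R * h)

    # Raised on the jib (~3 m), the horizon sits ~6 km out, so a shore ~7 km
    # away pokes above it; lowered to ~0.5 m, the horizon shrinks to ~2.5 km
    # and the far shoreline drops below it.
    for h in (3.0, 0.5):
        print(f"h = {h} m -> horizon ≈ {horizon_distance(h) / 1000:.1f} km")
    ```

    The point being that the effect reverses as the camera goes up and down, which is exactly what curvature predicts and a flat plane doesn't.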

    I don't know what else to tell you.