• Olivier5
    6.2k
    It seems that your neurons are not good enough to understand what they themselves are saying...

    You explained that your neurons created an eliminative materialist model that looked good to your neurons, but that other neurons, e.g. mine, might create other models, which would not look good to your neurons but look good to mine.

    So your model is some kind of noise generated by your neurons, which sounded good to your neurons.
  • Isaac
    10.3k
    You explained that your neurons created an eliminative materialist model that looked good to your neurons, but that other neurons, e.g. mine, might create other models, which would not look good to your neurons. — Olivier5

    Yep.

    So your model is some kind of noise generated by your neurons, which sounded good to your neurons. — Olivier5

    If by 'noise' you mean something initially random that gets honed by selective pressure, then yes, it's possible.

    I'm not seeing the purpose of your line of enquiry. Are you just confirming your understanding of my position, or do you actually have a point? If the latter, could you just get on and make it?
  • Olivier5
    6.2k
    I'm not seeing the purpose of your line of enquiry. Are you just confirming your understanding of my position, or do you actually have a point? If the latter, could you just get on and make it? — Isaac

    The point is that any model for the human mind needs to be compatible with the possibility of its own emergence as a possibly true model. Let us call this the reflexive challenge, because it is about human thoughts being in theory able to explain human thoughts.

    Unfortunately, eliminative materialism fails at this challenge because it literally ELIMINATES its own emergence as a possibly true model. The best shot you can arrive at is (in summary): "my neurons made some model of neuronal operation (eg Matter did it), which they kinda liked, and others will make other models (eg God did it) which their neurons will kinda like".
  • Isaac
    10.3k
    The best shot you can arrive at is (in summary): "my neurons made some model of neuronal operation (eg Matter did it), which they kinda liked, and others will make other models (eg God did it) which their neurons will kinda like". — Olivier5

    How does that mean it couldn't possibly be true?
  • Olivier5
    6.2k
    How does that mean it couldn't possibly be true? — Isaac

    It has the exact same chances of being true as any other neuronal noise, like @Wayfarer's or mine...
  • Isaac
    10.3k
    It has the exact same chances of being true as any other neuronal noise, like Wayfarer's or mine... — Olivier5

    Well

    a) that undermines what you said "eliminative materialism fails at this challenge because it literally ELIMINATES its own emergence as a possibly true model". It obviously doesn't eliminate it if it has the same chance as any other model.

    b) why would it have the same chance? That assumes the processes you, I and Wayfarer are using to determine our preferences have equal chance of yielding a true result. Whatever attracts me to particular models might draw me more toward ones which are true than whatever attracts you to models. There's no reason at all to assume an equivalence.


    If you've got any clear line of argument, I'm happy to pursue it, but I'm not going to continue with this vague fishing exercise where you just spew out some half-baked critique hoping it'll get a few jeers from the back row.
  • Olivier5
    6.2k
    Whatever attracts me to particular models might draw me more toward ones which are true than whatever attracts you to models. There's no reason at all to assume an equivalence. — Isaac

    For one, there's no reason to assume any particular truth because truth remains undefined in your model. What you spoke of was just better fit / control offered by some models, not truth. For two, whatever attracts me to particular models might draw me more toward ones which offer better fit than whatever attracts you to models. It cuts both ways, and thus the probability that your neuronal noises are right(er) is the same as the probability mine are right(er).
  • Isaac
    10.3k
    For one, there's no reason to assume any particular truth because truth remains undefined in your model. — Olivier5

    I haven't even mentioned truth. Why would a model of how the mind works need to first define what 'truth' is? Do you ask all propositions to include a definition of what makes them true at the beginning? That's quite a ridiculous way to proceed.

    whatever attracts me to particular models might draw me more toward ones which offer better fit than whatever attracts you to models. — Olivier5

    It might; we've not talked about that yet. Any two possibilities are equally likely prior to having any information about either, but we're not in that situation with the appeal of different models: they can certainly be analysed to give some probabilities, we just haven't done so. You were asking about the model, not the factors which attract me to it.
  • Alkis Piskas
    2.1k
    What I meant to say was our world, i.e. our worldview, is determined by how many words (read concepts/ideas) we know/understand. In other words vocab is a good index of the richness of a life. For example if you don't know or don't recognize nautical terms it means your world is limited to land, you're what sailors call contemptuously a landlubber. — TheMadFool

    Of course words enrich one's world. This is only logical. But imagine someone who is reading tons of books, is assisted by dictionaries and has the richest vocabulary on earth. However, since he has been disabled from infancy and confined to a chair, he has very little experience of the world. Imagine now someone who is semi-literate, yet is out there in the world, working, travelling, enjoying life, etc. How would their worlds compare? Whose world would be richest and fullest (subjectively or objectively)?

    Re your example: One travels a lot and is very often at sea but has very little knowledge of nautical terms. Can you say that his world is limited to land?

    Considering now both my and your examples, can you still say that (the number of) words determine one's world?
  • Olivier5
    6.2k
    Why would a model of how the mind works need to first define what 'truth' is? — Isaac

    It does, if it pretends to be possibly true.
  • Isaac
    10.3k
    It does, if it pretends to be possibly true. — Olivier5

    Do you require this of all models then? If a physicist comes up with a new model of atomic decay do you say "that's all very interesting, but what is truth?"
  • Olivier5
    6.2k
    Do you require this of all models then? If a physicist comes up with a new model of atomic decay do you say "that's all very interesting, but what is truth?" — Isaac

    He might have no particular problem with the default definition of truth, as the adequacy between a representation (or model) and what it attempts to represent (or model). But if he says something like that:

    [My] model is just a relation between the data from sensory receptors and the behaviour appropriate to it to reduce the uncertainty involved in any interaction — Isaac

    ... I might start to enquire.
  • Isaac
    10.3k
    He might have no particular problem with the default definition of truth, as the adequacy between a representation (or model) and what it attempts to represent (or model). But if he says something like that:

    [My] model is just a relation between the data from sensory receptors and the behaviour appropriate to it to reduce the uncertainty involved in any interaction — Isaac


    ... I might start to enquire.
    — Olivier5

    What I said is basically the same as what you claim the 'default' definition is.

    A 'true' model is one whose outputs (in terms of predictions usually) yield the expected results from assuming the model is true. My model of the pub being at the end of the road is 'true' if, when wanting to go to the pub, I walk to the end of the road and find it to be there as I would expect if my model were true.

    I'm not going to rehash the entire debate about qualia, perception, awareness, etc. Suffice to say I consider them to have presented a number of situations in which assuming a neural-based model of models has yielded the results we'd expect if that model were true.

    Not everyone prefers models which do this in a wide range of inter-subjective circumstances. Many people prefer familiarity to inter-subjective agreement, so, as long as the model works for them, they'd rather keep it even if people like scientists are finding the model doesn't work in the very specific circumstances they arrange for their experiments. Such a preference is less likely to yield a true model because it fits fewer data points.
  • Mww
    4.8k
    note the complete absence of talk about qualia in normal life. It's an artefact of philosophers. The all too frequent framing of the debate about such things as being 'common sense' vs. 'science' is nonsense, common sense wouldn't touch qualia with a bargepole either. — Isaac

    Artifact....absolutely. Nonsense....absolutely. Barge pole....ditto.

    I don't think there's anything it's 'like' to be me. — Isaac

    Agreed. What it’s like to be me, and “me”, are indistinguishable, which makes “what it’s like”, represented by qualia, utterly superfluous.
    ————-

    The main advantage science has is that it uses a lot of empirical data which is the sort of data we build our most treasured models about — Isaac

    Agreed. Empirical data-based models relate directly to experience, and experience is the ground of all our empirical knowledge.
    ————-

    I'm comfortable with saying there's a mental state that could be called 'Thinking of ...', but it would have to be loose affiliation of states. I bet if you were 'thinking of' a daemon, you couldn't necessarily tell me how many toes it had, yet you'd surely say it had toes. — Isaac

    Thinking of planets doesn’t imply any particular planet, so, no, the thought of a demon doesn’t say anything about its toes. It could therefore be said that thoughts of things are a loose affiliation of states, in that the assemblage of properties belonging to the thing are each of them, a separate thought, hence a separate, additional, state of thinking. I could tell you how many toes, but I’d have to think of that number before I could assign just that quantity to the demon, and then tell you. Of course, I could just as well think “hoofed”, or “web-footed”.

    Wherein I take my first exception to your comments:

    One might be of the impression that when we 'think of' something we bring a picture of it to mind. That would be wrong, I think. — Isaac

    I hold with the notion that human thinking is fundamentally predicated on images. Even while granting human mental images are not pictures in the truest sense, “I can see it in my mind” is precisely the general state of my mental machinations.

    Rather, we ready other parts of our mind in anticipation, we know the word for it, should we be called upon to speak it, we know the action for it (run, fight) should it actually appear, we know the things it's associated with... etc. — Isaac

    “....But suppose that in every sensation, as sensation in general, without any particular sensation being thought of, there existed something which could be cognized a priori, this would deserve to be called anticipation in a special sense—special, because it may seem surprising to forestall experience, in that which concerns the matter of experience, and which we can only derive from itself. Yet such really is the case here....”
    (CPR A167/B209)

    I submit for your esteemed consideration, that that which could be cognized a priori, in constructing your “ready other parts of the mind in anticipation”.....is none other than an image we insert into the process, that serves as a rule to which the anticipated, must conform.

    The stereotypical physicalist will adamantly decry the notion of images, maintaining instead the factual reality of enabled neural pathways, which translates to memory recall. Which is fine, might actually be the case, but I still “see” my memories, and science can do nothing whatsoever to convince me I don’t.

    Lots of good stuff in your post, so thanks for all that.
  • TheMadFool
    13.8k
    Of course words enrich one's world — Alkis Piskas

    In other words vocab is a good index of the richness of a life. — TheMadFool

    Clarification: Words don't enrich our lives so much as serve as a marker of the breadth of one's experiences.

    Imagine a person who knows the names of each part of a motorcycle and someone who doesn't. Imagine also that the latter has clocked more bike-hours than the former. Now, think of an animal (linguistically challenged). Which of the two people is closest to being an animal? I'm beginning to wonder if animals have it better than us - should we dispense with the linguistic rigamarole and get down to business? Pleaaasse!

    By the way, you do realize this conversation is only possible because you have a certain level of command over the English language. Paradoxical! :chin: Hmmmm...Language committing seppuku!
  • Isaac
    10.3k
    Wherein I take my first exception to your comments: — Mww

    I was doing so well...

    I submit for your esteemed consideration, that that which could be cognized a priori, in constructing your “ready other parts of the mind in anticipation”.....is none other than an image we insert into the process, that serves as a rule to which the anticipated, must conform.

    The stereotypical physicalist will adamantly decry the notion of images, maintaining instead the factual reality of enabled neural pathways, which translates to memory recall. Which is fine, might actually be the case, but I still “see” my memories, and science can do nothing whatsoever to convince me I don’t.
    — Mww

    Actually, I get what you're saying here. I think rejecting pictures wholesale might have been a little too extreme on my part. Just as your motor cortices might ready themselves to run, your visual centres arranging themselves as they would in response to a tiger is, for all intents and purposes, an image of a tiger. I suppose what I was trying to say, if I dial back the superlatives, is just that it's not only an image. That when we say "I'm imagining a cat" there's not just an image of a cat in neural form, there's a whole readiness for 'cat', at least some of which helps us fill in the blanks where the image bit is not so clear. If I ask how many legs your imagined cat had, do you count them in the image, or do you just know that cats have four legs...? That sort of thing might be more what I'm working toward.

    I would, in turn though, take issue with "...to which the anticipated, must conform". I'm not sure I see the justification for such a hierarchy. Often, maybe, the picture takes precedence, but the olfactory process is shorter than the visual one; stimuli from there will reach the working memory before signals from the visual centres (which have a lot more work to do), so in a situation where imagining a scene might be olfactory and visual, the olfactory state is going to set the priors for the visual (by which I mean it will determine which model the visual centres will try to fit their data to first, discarding it only on utter failure). We are strongly visual thinkers, but our biology betrays us as mammals wired for scent and sound foremost. Humans, like Microsoft, have simply patched on some new programming over the old code without actually doing the rebuilding required to house it.

    Lots of good stuff in your post, so thanks for all that. — Mww

    Likewise - always good to have someone coming from a different perspective to digest one's thoughts on a matter.
  • Olivier5
    6.2k
    My model of the pub being at the end of the road is 'true' if, when wanting to go to the pub, I walk to the end of the road and find it to be there as I would expect if my model were true. — Isaac

    Correct.

    Suffice to say I consider them to have presented a number of situations in which assuming a neural-based model of models has yielded the results we'd expect if that model were true. — Isaac

    That reads like mumbo-jumbo. I miss the part where anything mental gets "eliminated". Who are "we", if not some selves?

    The definition I'm using of eliminative materialism is the SEP one...

    Eliminative materialism (or eliminativism) is the radical claim that our ordinary, common-sense understanding of the mind is deeply wrong and that some or all of the mental states posited by common-sense do not actually exist and have no role to play in a mature science of the mind.
    — SEP
    — Isaac


    Look, my intuition-based, ordinary model of myself and of my non-eliminated mind works really well. Why should I adopt another?
  • Isaac
    10.3k
    That reads like mumbo-jumbo. I miss the part where anything mental gets "eliminated". Who are "we", if not some selves? — Olivier5

    The part where certain mental notions get eliminated is the entire canon of cognitive science for the last few decades, do you expect me to reproduce it all here?

    That it might read like mumbo-jumbo to you is not something I'm responsible for is it?

    my intuition-based model of myself and my non-eliminated mind works really well. Why should I adopt another? — Olivier5

    Read back through our exchange. Who initiated, who questioned the reasonableness of whose position? It's not I trying to get you to reject your model, it's you trying to claim mine is unreasonable. You asked about my model and three short posts in launched in with...

    your eliminative materialist model is generated by neurons in your brain, like some sort of 'woo'? — Olivier5

    ...and likening my work to...

    running around like materialist chicken — Olivier5

    ...Don't now try to pretend it's me attacking your position. I really could not care less what position you hold, it's your claim that mine is unreasonable that I responded to. If you don't understand the arguments supporting it then your general derisive antagonism toward materialist positions gives me no incentive at all to resolve that.
  • Alkis Piskas
    2.1k

    Clarification: Words don't enrich our lives so much as serve as a marker of the breadth of one's experiences. — TheMadFool

    I gave you two examples to show you that words do not determine one's experience(s). I can give you a lot more, but I don't see the point. As I can see, you ignored them. So that's it for me.
  • TheMadFool
    13.8k
    I gave you two examples to show you that words do not determine one's experience(s). I can give you a lot more, but I don't see the point. As I can see, you ignored them. So that's it for me. — Alkis Piskas

    You do realize that what you're saying is words are a waste of time, don't you? I'll leave you with that to ponder upon.
  • Olivier5
    6.2k
    The part where certain mental notions get eliminated is the entire canon of cognitive science for the last few decades, do you expect me to reproduce it all here? — Isaac

    Isn't the entire canon of cognitive science part of what gets eliminated, and if not, why not?
  • Isaac
    10.3k
    Isn't the entire canon of cognitive science part of what gets eliminated — Olivier5

    What? I've absolutely no idea what's led you to that conclusion, so I can't even begin to answer the question. Eliminated how? It's a canon - a body of work - the only way it could be eliminated is by destroying all the copies.
  • Olivier5
    6.2k
    Why, the human mind gets eliminated, but not the productions of the human mind?
  • Isaac
    10.3k
    Why, the human mind gets eliminated, but not the productions of the human mind? — Olivier5

    Who said anything about eliminating the human mind?
  • Olivier5
    6.2k
    What exactly do you attempt to eliminate in your "eliminative materialism"?
  • Isaac
    10.3k
    What exactly do you attempt to eliminate in your "eliminative materialism"? — Olivier5

    We've been through this - things like qualia, consciousness (in the sense of 'what it's like'), emotions as natural kinds, essences and forms (in the Platonic sense). I bolded it in the quote right at the beginning of our discussion - eliminative materialism includes the claim that some mental terms have no proper referent, I subscribe to that view. I don't subscribe to the view that no mental terms have proper referents, but I do think those referents are the same thing as neural states. That's not the same as saying the things don't exist. Denying them a separate existence from their material substrate is not denying them an existence tout court.
  • Olivier5
    6.2k
    Okay so your version only eliminates the good things in life, like tastes and smells and music, but not some tasteless, emotionless version of mental life. As long as it happens in a lab.
  • Isaac
    10.3k


    Yes, that's right. I want to ban tastes and emotions. I don't know why I didn't think of putting it that simply in the first place.

    What a waste of time.
  • Olivier5
    6.2k
    You're not much of an eliminativist...
  • RogueAI
    2.8k
    What exactly do you attempt to eliminate in your "eliminative materialism"?
    — Olivier5

    We've been through this - things like qualia, consciousness (in the sense of 'what it's like')
    — Isaac

    Isaac, are you claiming "what is it like to go sky-diving?" is something that needs to be eliminated? Or is nonsensical? Surely you've been asked "what is it like" questions by people before. How do you respond to them? Do you just ignore the question?