• fdrake
    6.6k
    Though I should say, I have (from reading the papers you cited) some grave concerns about the route Chalmers takes to get here (if here is indeed where I think he is - I suspect my ability to understand what he's on about is substantially less than yours). I'm not sure that the modality is actually a viable approach if he's trying to get at the way we actually think. There's too little scope in a kind of 'this else that' model where I think it's more 'this until further notice', but I may have misunderstood.
    — Isaac

    I'm not comfortable with the use of modality either, though I'll put my charity hat on. I'll assume that at least part of this discomfort regarding modality is rooted in the idea that models of neural networks don't seem to compute possibilities; they tend to compute probabilities.

    For a reader who isn't clear on why that matters: something can have probability 0 and still be possible, like drawing exactly 1/2 from a uniform distribution on [0, 1]. (There is no uniform way to pick 'randomly out of the integers', so the continuous case makes the point more cleanly.) Further, assigning possibility to a state value given another state value is a very different idea from assigning probability to a state value given another state value; the latter shows up in neural networks, the former doesn't.
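    To make that contrast concrete, a toy sketch (made-up numbers, my own illustration rather than anything from the papers): a network's output head assigns a graded probability to every candidate state, whereas a modal 'possibility' assignment is all-or-nothing membership in a set of admissible states.

        import numpy as np

        def softmax(z):
            # Map raw scores to a probability distribution: entries > 0, summing to 1.
            e = np.exp(z - z.max())
            return e / e.sum()

        # What a neural network head typically computes: P(state | evidence).
        scores = np.array([4.0, 1.0, -2.0])   # hypothetical scores for 'dog', 'fox', 'unicorn'
        probs = softmax(scores)               # roughly [0.95, 0.047, 0.002]

        # A possibility assignment, by contrast, is just a set of admissible states;
        # membership is all-or-nothing, and a state with probability ~0 can still belong.
        possible = {'dog', 'fox', 'unicorn'}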

    Anyway, I'll put on my charity hat for why it's okay.

    <Charity>

    I don't know if employing modality as Chalmers does for studying the mind is really aimed at questions of how the mind works - like descriptions of processes or attempts at modelling the modelling process. I think it's aimed one level of abstraction higher - at the types of descriptions of processes, or 'modelling the modelling process' attempts, which could in principle make some kind of sense. E.g., epistemic content is posited as a type of mental content, not as any specific means of ascribing epistemic content to an intentional state. The specific means by which someone attains epistemic content of a specific configuration in practice are left uncharacterised in the paper. To my mind he had to introduce parameters for the scenarios he analysed to suggest the appropriate content: 'water' and 'XYZ' as names in the Twin Earth scenario, 'arthritis' and 'tharthritis' in the other one.

    So I don't think Chalmers needs to be judged on whether his papers produce descriptions which cash out in procedural accounts of people's perceptual processes and of how consciousness works in general; I think they ought to be judged in the crucible of whether they're tracking the type of entities which are successful in those theories.

    </Charity>

    I'm not really convinced by that reasoning, but there you go.

    Were there other reasons you thought that employing modality as Chalmers does might not be okay?

    So, there are these strong connections, whose function neuroscientists (to my knowledge) have yet to fully work out, between early areas of sub-conscious cortices and the hippocampus; an example might be the V2 region of the visual cortex. Usually a connection to the hippocampus is involved in consolidation of some memory, so it seems odd that such early regions would be strongly tied to it. One idea is that there's some higher-level modelling suppression going on even in these early centres, like - 'is that likely to be an edge? Let me just check'. I think (though I can't lay my hands on any papers right now) there's one of these connections into the cerebellum too.
    — Isaac

    This is super cool. The paper I linked seemed to indicate something similar to that, e.g. the microsaccades having directional biases towards the required coloured stimuli. Assuming that the content of the attentional template of a microsaccade has its information passed about the brain in the way you mentioned, anyway.
  • Janus
    16.3k
    You can tell what Terry Pratchett referred to as 'lies to children' about the content of saccades in terms of propositions, though. E.g., someone might 'look at a chin to provide more information about the orientation of a face', but there's no conscious event of belief or statement associated with what the saccade's doing at the time; the speech act which associates the propositional content with the saccade is retrospective.
    — fdrake

    Right, so to say that saccades are driven by beliefs seems to stretch the meaning of the term too far. See below for why I think it would be more apt to speak of saccades in terms of expectation or anticipation.

    What do you see as the difference between the two?
    — Isaac

    I think of beliefs as being more obstinate than expectations. For example say I always put my keys in a particular place; then I 'automatically' expect them to be there even though I know that sometimes I fail to put them there. On the other hand if asked whether I believe they are there I might say 'no' because I acknowledge I might have put them somewhere else, someone might have moved them, and so on.

    For contrast, if someone asked me whether my fridge is where it usually is in the kitchen, I would not merely expect it to be there but I would positively believe it to be there, even though I know there is a very tiny chance that it's not.
  • Andrew M
    1.6k
    our model of (some part of) the world and the world we are modeling sometimes match up. I think we essentially agree.
    — Andrew M

    Yeah. We have a vested interest in them matching up, not just with the world, but (and this is the really important part, for me) with each other's models. In fact I'd be tempted to go as far as to say that it's more important that our models match each other's than it is that they match the state they're trying to model. I'm pretty sure this is the main function of many language games, the main function of social narratives, the main function of rational thought rules: to get our models to match each other's.
    — Isaac

    Certainly it's important for communication and co-operation. But it's worth noting that we have language not just for agreement and disagreement (i.e., whether our model matches up with other people's models), but also for being correct and mistaken (i.e., whether our models match up with the world we are modeling).

    Consider a Robinson Crusoe on a deserted island who doesn't communicate with anyone. Mistakes in his world modeling can be costly (nope, no precipice there...)
  • Isaac
    10.3k
    Hold up. What do you mean by "conscious" here? What is a worm missing that it would need in order to be conscious?
    — frank

    I meant 'conscious of...'

    Not that you couldn't ask the same question there too, but I'm really just trying to get at whatever distinction you're applying to 'intent' such that you think a sub-conscious process couldn't satisfy the definition. You want to reserve the word for some types of directed behaviour but not others, right?
  • Isaac
    10.3k
    at least part of this discomfort regarding modality is rooted in the idea that models of neural networks don't seem to compute possibilities; they tend to compute probabilities.
    — fdrake

    Were there other reasons you thought that employing modality as Chalmers does might not be okay?
    — fdrake

    No, the above pretty much covers it. It sounded too much like the 'other' option was part of the decision-making process, as if its likelihood (or lack of it) helped determine the decision to maintain the prior, and I don't see that being the main case. Like the prior might be 'that's a dog' and somehow establishing that 'well, if it's not a dog then it must be a unicorn and that seems very unlikely' helps decide that it is, in fact, a dog. I can see situations at higher-level processing where that might be the case, but rarely, and even then it still falls under the general case of 'have I got any reason not to think that's a dog?', which seems to me to be a better way of expressing Bayesian inference than the modality Chalmers introduces.
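    To put a toy shape on that (all numbers made up, purely to show the form of the inference): the check 'have I got any reason not to think that's a dog?' amounts to asking whether the evidence is surprising enough under the prior to move the posterior at all - note that the alternative stays lumped as 'not-dog' rather than enumerated as specific options like 'unicorn'.

        # Hypothetical priors and likelihoods; none of these numbers come from anywhere.
        prior = {'dog': 0.95, 'not-dog': 0.05}
        likelihood = {'dog': 0.80, 'not-dog': 0.10}   # P(observation | hypothesis)

        # Bayes' rule: posterior is proportional to likelihood * prior.
        unnorm = {h: likelihood[h] * prior[h] for h in prior}
        total = sum(unnorm.values())
        posterior = {h: p / total for h, p in unnorm.items()}

        # An unsurprising observation leaves the prior essentially in place:
        print(posterior)   # {'dog': ~0.993, 'not-dog': ~0.007}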

    I'm not really convinced by that reasoning, but there you go.
    — fdrake

    Yeah, me neither, but good effort, it's good to treat the positions of others as charitably as possible (charity seems in rather short supply these days). I was thinking that a sort of modal tree could be built where options were gradually eliminated at each branch such that we could maintain the rest of Chalmers' model. A kind of one-by-one checking to see if there are reasons not to hold the prior, but that seemed too much like unnecessarily elbowing his work into my preferred model...

    the microsaccades having directional biases towards the required coloured stimuli. Assuming that the content of the attentional template of a microsaccade has its information passed about the brain in the way you mentioned, anyway.
    — fdrake

    Yeah, that's the idea. All very much to play for though in terms of this cashing out in what these links actually do (although I'm not bang up to date on this anymore).
  • Isaac
    10.3k
    I think of beliefs as being more obstinate than expectations. For example say I always put my keys in a particular place; then I 'automatically' expect them to be there even though I know that sometimes I fail to put them there. On the other hand if asked whether I believe they are there I might say 'no' because I acknowledge I might have put them somewhere else, someone might have moved them, and so on.
    — Janus

    Interesting, thanks. I've run into a lot of trouble for lack of a full grasp of just how many different ideas there are of what 'belief' means. I've been in something of an echo-chamber in terms of the working definition of belief, and talking about these ideas in a wider community has proven problematic on that account.

    it's worth noting that we have language not just for agreement and disagreement (i.e., whether our model matches up with other people's models), but also for being correct and mistaken (i.e., whether our models match up with the world we are modeling).

    Consider a Robinson Crusoe on a deserted island who doesn't communicate with anyone. Mistakes in his world modeling can be costly (nope, no precipice there...)
    — Andrew M

    I can definitely see the need for our models to at least be consistent with the world (they don't have to match, just work), but I don't see a role for language in that. Are you thinking of the link between grammar, naming, and the enabling of thought?
  • frank
    15.8k
    I meant 'conscious of...'
    — Isaac

    Well, you said "conscious species." You can't be a functionalist and use that kind of language. I'll put you down for non-reductive physicalism.

    Not that you couldn't ask the same question there too, but I'm really just trying to get at whatever distinction you're applying to 'intent' such that you think a sub-conscious process couldn't satisfy the definition. You want to reserve the word for some types of directed behaviour but not others, right?
    — Isaac

    "Directedness" just sounds teleological. At the chemical level we just need chemicals and no purposeful events. So maybe "directedness" can be jargon for a bunch of totally undirected events.
  • Isaac
    10.3k
    Well, you said "conscious species." You can't be a functionalist and use that kind of language.
    — frank

    I define 'conscious' (earlier in this thread, even, I think) as a process of logging certain mental states to memory. So I think I can be functionalist and still use that language (if I wanted to be functionalist, that is). I used to be flat out behaviourist, in fact. You wouldn't recognise me in my earlier work.

    "Directedness" just sounds teleological. At the chemical level we just need chemicals and no purposeful events.frank

    Yeah, maybe. But all those chemicals are instructed by a mind which is itself several models which have a function. I don't have any problem in saying that the purpose of the printer cable is to carry information from the computer to the printer, or that the purpose of some sub-routine in the program is to translate the key inputs into binary code. None of these has a purpose as an isolated system, but each has one as part of the larger machine - a purpose given by the purpose of the machine of which it is part.
  • frank
    15.8k
    I define 'conscious' (earlier in this thread, even, I think) as a process of logging certain mental states to memory.
    — Isaac

    Ok. Good to know.

    So I think I can be functionalist and still use that language
    — Isaac

    A worm demonstrates functions of consciousness. You'd need to go ahead and allow consciousness all the way down. That's not an unusual stance.

    Yeah, maybe. But all those chemicals are instructed by a mind which is itself several models which have a function. I don't have any problem in saying that the purpose of the printer cable is to carry information from the computer to the printer, or that the purpose of some sub-routine in the program is to translate the key inputs into binary code. None of these has a purpose as an isolated system, but each has one as part of the larger machine.
    — Isaac

    I know a tad about computer architecture. There are no purposes in there. You can allow that there are if you get neo-Kantian about it, maybe?
  • fdrake
    6.6k
    As a rather selfish request, can you please provide more words and citations for these positions? By the sounds of it you're writing largely from Chalmers' perspective on things? To my knowledge it's rather contentious that consciousness goes 'all the way down' with functional properties if one is a functionalist. It's also ambiguous whether you're using 'all the way down' to refer to panpsychism or to bodily functions being conscious 'all the way down'.

    More words please.
  • frank
    15.8k
    As a rather selfish request, can you please provide more words and citations for these positions?
    — fdrake

    Ok. I'll try.

    By the sounds of it you're writing largely from Chalmers' perspective on things? To my knowledge it's rather contentious that consciousness goes 'all the way down' with functional properties if one is a functionalist.
    — fdrake

    No. It's not Chalmers. I don't think Chalmers lays out a definition for consciousness, except that whatever it may be, it needs to include phenomenal consciousness (in the case of humans anyway).

    "All the way down" is a quote from some article. Sorry I can't cite it. But if a functionalist says consciousness is identical to function, but excludes worms, they'd need to explain why.

    It's also ambiguous whether you're using 'all the way down' to refer to panpsychism or to bodily functions being conscious 'all the way down'.
    — fdrake

    No, "all the way down" zoologically speaking. Anything with functions of consciousness.
  • Banno
    25.1k
    I baulk at equating each neural network with some attitude towards a proposition.
    — Banno

    Why is that?
    — Isaac

    Mostly because such propositional attitudes are so mercurial. My belief that the door is closed is manifest in so many different ways - I would have to open it to go outside, the cat cannot get out, the air in here might improve if I open the door, the air conditioner is not needed, the breeze outside is not coming into the house, the light is sufficient for me to see around the room, and so on. How would a single neural network map against all these possibilities?

    More formally, there's Davidson's question concerning the scientific validity of any such equation. Suppose that we identify a specific neural network, found in a thousand folk, as being active when the door is open, and hence conclude that the network is roughly equivalent to the propositional attitude "I believe that the door is open". If we examine person 1001, who claims to believe that the door is open, and do not find in them that specific neural network, do we conclude that we have not identified the correct specific network, or do we conclude that they do not really believe the door is open?

    I think this argument shows that there is a difference in kind between propositional beliefs and neural nets that militates against our being able to equate the two directly, while admitting that there is nothing much more to a propositional attitude than certain neural activity. That is, it is anomalous, yet a monism.

    My apologies for this not being clearly expressed. The problem is somehow reconciling Mary Midgley and neuroscience.
  • AgentTangarine
    166
    But if a functionalist says consciousness is identical to function, but excludes worms, they'd need to explain why.
    — frank

    If you exclude worms from the function of consciousness to see the world, what's to be explained? Note that the function of consciousness (to see the world) is not the same as explaining it. You need consciousness to walk around. But that's no explanation.
  • frank
    15.8k
    If you exclude worms from the function of consciousness to see the world,
    — AgentTangarine

    Maybe replace "see the world" with "light sensitivity.".
  • AgentTangarine
    166
    Maybe replace "see the world" with "light sensitivity.".frank

    Light sensitivity is a function of the eyes. Seeing is a function of consciousness. You need it to see worms. To see the world. But you can exclude the worm or the world. Focus on consciousness alone.
  • frank
    15.8k
    Sorry, I don't know what you're saying.
  • fdrake
    6.6k
    I think this argument shows that there is a difference in kind between propositional beliefs and neural nets that militates against our being able to equate the two directly, while admitting that there is nothing much more to a propositional attitude than certain neural activity. That is, it is anomalous, yet a monism.
    — Banno

    Nicely done. Do you believe that's reflective of Davidson's position (since he proposed the term 'anomalous monism'), or is it more Banno's than Davidson's?
  • Banno
    25.1k
    Well, it might best be described as Banno's understanding of Davidson. Then if I have it wrong it's not his fault, and if I have it right he can take the credit.
  • AgentTangarine
    166
    Sorry, I don't know what you're saying.
    — frank

    Exactly what I wrote. It's plain English. You can exclude the worm and still see it with closed eyes and imagination.
  • AgentTangarine
    166
    If we examine person 1001, who claims to believe that the door is open, and do not find in them that specific neural network, do we conclude that we have not identified the correct specific network, or do we conclude that they do not really believe the door is open?
    — Banno

    Are the other 1000 all the same, under the same conditions? If he believes the door is open, why should his network be different? Is he lying?
  • AgentTangarine
    166
    How can his network be different if he believes the door is open? Is he lying? Have we wrongly identified his network? If he's not lying then his belief part should be the same.
  • Banno
    25.1k
    Yes, that's right.
  • fdrake
    6.6k
    ↪fdrake Well, it might best be described as Banno's understanding of Davidson. Then if I have it wrong it's not his fault, and if I have it right he can take the credit.
    — Banno

    :up:

    Any chance I could get you to zoom in on this bit:

    More formally, there's Davidson's question concerning the scientific validity of any such equation. Suppose that we identify a specific neural network, found in a thousand folk, as being active when the door is open, and hence conclude that the network is roughly equivalent to the propositional attitude "I believe that the door is open". If we examine person 1001, who claims to believe that the door is open, and do not find in them that specific neural network, do we conclude that we have not identified the correct specific network, or do we conclude that they do not really believe the door is open?
    — Banno

    What is it about the concept of a neural network that makes it seem very specific and individually variable?

    What challenge does the individual variability of neural nets pose to the equation of a belief state with a type or behaviour of a neural net? (Token vs type identity questions here.)

    Lastly, I'm really interested in where you got those two possibilities from in:

    If we examine person 1001, who claims to believe that the door is open, and do not find in them that specific neural network, do we conclude that we have not identified the correct specific network (possibility 1), or do we conclude that they do not really believe the door is open (possibility 2)?

    Why is it the case that possibility 1 and possibility 2 seem to follow from the hypothetical scenario, and are there other possibilities?
  • Banno
    25.1k
    I'm simply not well enough versed in neural nets to answer your questions.

    The two possibilities... I used the same argument a few years ago to the end of demonstrating that neuroscience had an issue with falsifiability. I think it was in a discussion with @Isaac, who showed that there might be sufficient nuance in the descriptions of networks to allow for such eventualities; that the details might well provide an out. The proper attitude is to wait and see what Isaac and his friends can come up with.

    But perhaps I have not understood your question?
  • Banno
    25.1k
    Another reason to baulk. Propositional attitudes have a linguistic structure, attitude:proposition. In this sense they can be subject to algorithms. Neural networks are not algorithmic - one cannot set out in a step-wise fashion what is going on in a neural network as it solves a problem.

    That is, there seems to be a basic difference in their logical structure.
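    A purely illustrative toy of this contrast (a sketch only, not a claim about how either side must be formalised): the attitude:proposition pair can be written as explicit data that step-wise rules operate on, whereas a trained network's contribution to solving a problem is spread across numeric weights with no such parts to point at.

        from dataclasses import dataclass
        import numpy as np

        # The attitude:proposition structure as explicit data -- something an
        # algorithm can inspect and transform step-wise.
        @dataclass
        class PropositionalAttitude:
            attitude: str      # 'believes', 'doubts', 'hopes', ...
            proposition: str   # 'the door is open'

        belief = PropositionalAttitude('believes', 'the door is open')
        # A rule can operate directly on the parts:
        doubt = PropositionalAttitude('doubts', belief.proposition)

        # A network's 'answer', by contrast, lives in opaque numeric parameters;
        # there is no 'proposition' component to pick out and transform.
        weights = np.random.default_rng(0).normal(size=(64, 64))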
  • Isaac
    10.3k
    A worm demonstrates functions of consciousness.
    — frank

    Maybe. I'm not trying to draw a line here, I'm trying to establish the line you want to draw. The type of thing/process 'intention' is reserved for.

    I know a tad about computer architecture. There are no purposes in there.
    — frank

    Yet "what's the purpose of that wire?" is not an incoherent question.
  • Isaac
    10.3k
    Mostly because such propositional attitudes are so mercurial.
    — Banno

    It might surprise you then to learn how mercurial neural networks are. The concept is known as redundancy and degeneracy in neural architecture (often just 'degeneracy'). Excellent primer here. Basically, neural networks do not seem to be restricted to carrying out (representing) only one task (mental state), but rather can carry out different (usually related - but not always) tasks, in sequence; there's a toy demonstration below. So there's plenty of room, no worries on that score, but... on what grounds are they 'the same' or 'different' tasks then? Which I think is where you're going with...

    If we examine person 1001, who claims to believe that the door is open, and do not find in them that specific neural network, do we conclude that we have not identified the correct specific network, or do we conclude that they do not really believe the door is open?
    — Banno

    ...yes?
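    The toy demonstration mentioned above (my own minimal sketch, nothing from the primer): one facet of this mercurial character is that many physically different parameterisations realise exactly the same function. Shuffling a small network's hidden units changes the 'wiring' but not the input-output behaviour, which is roughly the shape of the person-1001 problem.

        import numpy as np

        rng = np.random.default_rng(0)
        d, h, k = 5, 8, 3                     # input, hidden, output sizes
        W1, b1 = rng.normal(size=(h, d)), rng.normal(size=h)
        W2, b2 = rng.normal(size=(k, h)), rng.normal(size=k)

        def net(x, W1, b1, W2, b2):
            # A tiny two-layer network: W2 @ relu(W1 @ x + b1) + b2.
            return W2 @ np.maximum(W1 @ x + b1, 0.0) + b2

        # Shuffle the hidden units: a physically different network...
        perm = rng.permutation(h)
        W1p, b1p, W2p = W1[perm], b1[perm], W2[:, perm]

        x = rng.normal(size=d)
        # ...that computes exactly the same function.
        assert np.allclose(net(x, W1, b1, W2, b2), net(x, W1p, b1p, W2p, b2))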

    I'd agree with you here. I think we make a mistake if we model beliefs only in a single brain. Beliefs (for me) are a relationship between the state of some neural network (a snapshot of it), the world it's trying to model, and (for us anyway) the relationship between that system (neural state-world) and other people's systems toward the same part of the world. How do we know it's 'the same part of the world'? I think the system is iterative and constructive, so it's collaboratively built, me-society-world; we create a unified model using the consistency of the world's hidden states in the same way as one might use a primary key in a database, to link the models we each have.

    So my belief that the door is open cannot be just a state of some neural network (though it is encoded in such a state - which is where I think I misunderstood your position previously); it's a relationship between a model I have (which is a neural network), the hidden state of the world (the door), and the models other people have (neural networks) of what we assume (by constant experimentation and communication) is the same hidden state - the primary key by which we create social constructions like 'door'.

    So I don't see a problem with saying that my belief is a model (a neural network) in my brain, but we get stuck when we want to say what it is a belief that...; on its own it's just a belief (in my brain), a tendency to act some way or other. But to get a belief that... we need both a world toward which that belief intends to act, and a social construction (lots of us all trying to get our models of the same hidden states to vaguely match) to fill out the ellipses.

    Non-social creatures, of course, only have the first part, or the second is of trivial importance. I'm not wedded to the idea of calling what they have a belief, but I'm not averse to it either.
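    To unpack the primary-key analogy with a made-up example (the names and values here are mine, purely illustrative): each agent keeps their own model, and the presumed-stable hidden state of the world plays the role of the shared key on which the separate models can be joined and compared.

        # Each agent's models, keyed by the world's hidden state - the 'primary key'.
        world_key = 'hidden-state-42'        # standing in for the door itself

        isaac_models = {world_key: {'kind': 'door', 'state': 'open'}}
        banno_models = {world_key: {'kind': 'door', 'state': 'open'}}

        # 'Joining' on the shared key is what lets us ask whether our models match
        # each other (and, separately, whether either matches the world):
        models_agree = isaac_models[world_key] == banno_models[world_key]
        print(models_agree)   # True: the social construction 'door' holds steady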

    My apologies for this not being clearly expressed.
    — Banno

    No worries, I bet a random sample of readers would understand your post to a considerably greater degree than my response!
  • AgentTangarine
    166
    A person can hold the belief that the door is closed and claim as much while the door is in fact open.
    Her neural network will differ from those of the people who rightly believe and claim that the door is open. What is rightly believing? That depends on what you call the right neural network for believing something is the case.
    Humans here are reduced to NNs. You could just as well ask the people without looking at their NNs.
  • frank
    15.8k
    Maybe. I'm not trying to draw a line here, I'm trying to establish the line you want to draw. The type of thing/process 'intention' is reserved for.
    — Isaac

    You did draw a line, but I think you're back on track now. Worms are conscious.

    Then:
    Directedness" just sounds teleological. At the chemical level we just need chemicals and no purposeful events.
    — frank

    Yeah, maybe. But all those chemicals are instructed by a mind which is itself several models which have a function.
    — Isaac

    Chemicals are instructed by a mind? What?