• Jack Cummins
    5.1k

    What we do know is that there is some degree of shared experience, like the sky being blue. Obviously, it is not possible to know another person's actual experience of what this blue is, but if people paint it there is likely to be some similarity in what this blue looks like, even to the point of knowing which other colours to mix with it.

    However, there are unique elements to perception which are represented in art, like the swirls in Van Gogh's paintings and the composition of dots in Seurat's. This may be where emotional or mental states come into the variable nature of what is perceived. There is an external world to be perceived, but it is filtered through the lens of perception, and the concept of qualia, even though abstract, may pinpoint the tensions arising in this.
  • frank
    14.7k


    Experience is multi-faceted. That's true.
  • frank
    14.7k
    And yet we hear talk of such spectres as the 'red quale'. Does the 'red consciousness' make any sense at all? — Isaac

    No.
  • Banno
    23.5k
    The difficult relation here is between intentions and world-modelling. Both say "The world is this way", one in terms of intentional attitudes towards propositions and the other in terms of neural networks. I baulk at equating each neural network with some attitude towards a proposition.

    I find myself back at some form of anomalous monism - the view that there is some such equivalence but it cannot be specified.
  • Banno
    23.5k
    I am inclined to think that the idea of qualia is useful to some extent, but with some limitation, in the way in which it can become a bit of a knotty tangent at times. — Jack Cummins

    Pretty much right. The problems arise as one asks folk to specify where and how qualia are of use. For example:
    There was a lot of interesting analysis of art and music based on qualia as colour scales and pitch and tone and time scales etc. Prall, Goodman, Boretz. But there, there was no philosophical bias, no claim of epistemological priority. It was just a matter of starting the analysis with those elements. — bongo fury
    The question is, how do qualia improve the analysis in a way that is not just as clear from a discussion of colour scales and pitch and tone and time scales etc... And this is the part that never receives a clear answer. The usual approach is to allude to their being useful without actually saying how - as here with a couple of citations and some passive aggression.
  • Jack Cummins
    5.1k


    One aspect on which I think qualia have a specific bearing is understanding witness accounts and statements. I wondered about the nature of this before I had even come across the puzzle of qualia. In critical incidents people are often asked to give witness reports, and I have been in a few situations where I had to give accounts or look at witness statements. There were great discrepancies between accounts, including perception of the sequence of events and descriptions of people. It means that working out a clear picture of events can be difficult, and part of the issue may be that people perceive the exact same events but uniquely. The emotional impact of stress and heightened emotions may play a major role in this.

    My own understanding of the importance of the idea of qualia is that it may throw some light on these kinds of difficulties, but not that it completely eradicates them. I guess my other interest in qualia is the bearing which it has on aesthetic appreciation in the arts.
  • Banno
    23.5k
    What is it you think talk of qualia adds here?
  • frank
    14.7k


    I don't think it adds anything. There are numerous ways to distinguish experience from function. If "qualia" makes things confusing, drop it. The music was in response to "passive aggressive."
  • bongo fury
    1.6k
    The question is, how do qualia improve the analysis in a way that is not just as clear from a discussion of colour scales and pitch and tone and time scales etc... — Banno

    That's all they were, for Goodman at least. Classes of stimuli. Sound events and illumination events. But classified through human aesthetic judgement and culture, rather than physics. And hence free from all the spurious distractions of "red-as-a-wavelength" etc. And thus answering your question.
  • Banno
    23.5k
    but the question was why do hallucinated spiders look like real spiders. How do you explain the behavior of someone hallucinating without "silly" qualia? How is it that something that isn't real looks like something that is unless they both take the same form (qualia)? — Harry Hindu

    All of this can be put simply as "Spider hallucinations look like spiders" - no use of "qualia"!

    What's relevant about an hallucination of a spider is that there is no spider. Hence, as you point out, characterising some event as an hallucination presumes realism.

    Denying realism isn't denying what is real. It just changes the reference to what is real. — Harry Hindu

    To be sure, realism is the view that there is stuff in the world that is independent of the mind, so the claim that what is real is stuff in the mind would not count as realism.
  • fdrake
    6k
    No, not at all, but it's what I was getting at with my clumsy introduction of stochastic resonance. What's inside or outside any Markov blanket is not necessarily the same as inside or outside a skull. That's true of our sensory receptors (for whom their first 'inside' node is actually outside the body) and it's true for our internal models (which may have nodes outside their Markov boundary - my stochastic resonance example - but inside the brain) — Isaac

    I think I'm getting there now, thank you for the clarifications. I wanna put a note here about how this maybe intersects with the externalism/internalism debate on mental content.

    I hope this helps contextualise some things, even if it's mostly a catalogue of ambiguities.

    Broadly speaking, someone is an internalist about X if they believe X is determined only by/depends only upon the body or mind of the individual which bears X. Like someone is an internalist about colour if they believe colour depends only upon the individual which sees it, literally 'it's in the eye of the beholder'.

    Someone is an externalist about X if they believe X is not determined only by the body or mind of an individual which bears X.

    People can be internalist about an X in a few different ways: they might restrict the notion of dependence in some way, to allow X to be determined 'only' by the body or mind in one flavour of dependence, or in one subclass of the class X. E.g., someone like Chalmers is an internalist about mental content, but only about a specific type of mental content (narrow content) which he construes as determinative of psychological states, even though he believes some types of mental content are 'broad' - they depend upon things which aren't an individual's body or mind.

    Maybe the first thing to do in terms of this debate is reframing it in terms of vehicle externalism and content externalism to nip an ambiguity in the bud:

    Vehicle externalism, more commonly known as the thesis of the extended mind, is externalism about the vehicles of mental content. According to the thesis of extended mind, the vehicles of mental content—roughly, the physical or computational bearers of this content—are not always determined or exhausted by things occurring inside the biological boundaries of the individual. — SEP

    The distinction between states and acts is, in the context of this form of externalism, a significant one, and the general idea of extended mind can be developed in two quite different ways depending on whether we think of the vehicles of content as states or as acts. Thinking of the vehicles of content as states leads to a state-oriented version of extended mind. Thinking of these vehicles as acts leads to a process-oriented alternative. — SEP

    Mental content is not free-floating. Wherever there is mental content there is something that has it—a vehicle of content. Mental states (belief, desires, hopes, and fears, etc.) are natural candidates for vehicles of content. So too are mental acts (believing, desiring, hoping, fearing, etc.). As a rough, initial approximation, extended mind is the view that not all mental states or acts are exclusively located inside the person who believes, desires, hopes, fears, and so on. Rather, some mental states or acts are, in part, constituted by factors (e.g., structures, processes) that are located outside the biological boundaries of the individuals that have them. Thus, extended mind differs from content externalism not merely in being about mental vehicles rather than mental contents, but also in being committed to a claim of external location rather than simply external individuation. If extended mind is true, some vehicles of content are not, entirely, located inside the biological boundaries of individuals that have them. Rather, they are, partly, constituted by, or are composed of, factors that lie outside those boundaries. — SEP

    If we're going to posit that the Markov blanket of an individual's perception process touches the environmental hidden states, that seems to be a form of vehicle externalism if it satisfies either form of vehicle externalism. I can't think of an easy way to relate the state version to our debate
    *
    (the way it's articulated on SEP seems to be in terms of already individuated mental states, which is 'too late' here..)
    (but I'm sure it exists), so I'll ignore it. Focussing on the process based one:

    I think Friston's account counts as process-based vehicle externalism (at face value), since the environmental states do constitute part of the process of perception for him - and, as we've talked about, it seems the Markov blanket of states involved in the perception process extends beyond the body of the individual. For example, you're catching a ball: the current position of the ball (an external state) influences various sensory state nodes. Thus the external state is involved somehow in the perception process.
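    The Markov-blanket talk here can be made concrete with a toy graph. This is only a sketch, not Friston's actual model: the node names (ball_position, retina, percept, motor_command) are hypothetical stand-ins for external, sensory, internal and active states, and a node's blanket is computed in the standard way as its parents, its children, and its children's other parents.

    ```python
    # Toy directed graph over hypothetical states in the ball-catching
    # example. Each key maps a node to the list of its parents.
    parents = {
        "ball_position": ["motor_command"],   # acting moves you relative to the ball
        "retina": ["ball_position"],          # sensory states, driven by the world
        "percept": ["retina"],                # internal states, driven by the senses
        "motor_command": ["percept"],         # active states, driven by the model
    }

    def children(node):
        """Nodes that list `node` among their parents."""
        return [n for n, ps in parents.items() if node in ps]

    def markov_blanket(node):
        """Parents + children + children's other parents."""
        blanket = set(parents.get(node, []))
        for child in children(node):
            blanket.add(child)
            blanket.update(parents[child])
        blanket.discard(node)
        return blanket

    print(markov_blanket("percept"))  # {'retina', 'motor_command'} (order may vary)
    ```

    Note that the internal node's blanket comes out as the sensory and active states only: given those, the percept is conditionally independent of the ball itself, which is one way of putting the point that the external state is involved in perception without being directly "touched" by the internal model.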

    Maybe it could be construed that the ball isn't a 'physical bearer' or 'partly constituting' the process of perception - if you focus on what's 'logged to consciousness' as a meaning for 'what's perceived', it might be possible to argue that 'what's perceived' doesn't have an immediate dependence upon the external state values because the sensory states interface with the world and the internal states which are logged to consciousness don't. There's probably some wrangling regarding where you draw the line. If the 'dependence' is 'any sort of dependence' rather than 'proximate cause in terms of states in the model's graph', it looks to be vehicle externalist in the process sense, if it's the latter maybe it's still possible to be a vehicle internalist.

    I also want to stress that this is talking entirely about dependence without talking about how content is determined. This isn't about 'what the information is' in perception, the overall values, it's about the relationships between information states - how information is passed around - in the process of perception. Topology/networking/relationality rather than individual state values/properties/qualitatives.

    The latter is more similar to what content is. If the frequency of reflected light from a position in your environment is an environmental hidden state, then the 'content' of its perception might be thought of as a colour associated with a location (with usual caveats regarding priors etc).

    But that doesn't do very well to bridge a gap between 'red cup' and state variables in the environment - especially because I can act to drink from a cup, but I cannot act to sip from a light frequency emanating from a location - the sensory data I act upon is never raw, so to speak. So there's a puzzle regarding bridging the 'content of a state in a neural network' with the content of an intentional act.

    The content of a state in a neural network doesn't seem to be a good match for the use of the word 'cup', since using the word to refer to a cup involves a perception which consists in lots of states synergising together in a body-environment interaction - ignoring hallucinations/illusions. In the case of hallucinations and illusions maybe there's no body-environment interaction
    **
    (I doubt this personally, but entertaining the idea)
    , but still in that case there's a complex synthesis of bodily states with each other to produce the hallucination - and the relationship of the cup to those states is just as mystifying. (Also probably of interest to @Banno).

    Turning from vehicles to content: someone is a content externalist when:

    Content externalism (henceforth externalism) is the position that our contents depend in a constitutive manner on items in the external world, that they can be individuated by our causal interaction with the natural and social world. — SEP

    and an internalist when:

    Content internalism (henceforth internalism) is the position that our contents depend only on properties of our bodies, such as our brains. Internalists typically hold that our contents are narrow, insofar as they locally supervene on the properties of our bodies or brains. — SEP

    Which leaves a question regarding what mental content is. When we start talking about mental content in terms of desires, hopes, attitudes towards stuff... these get called intentional states, and one of two things has to happen to make sense of them (though I'm sure @Banno would tell you that intentional states are directed towards statements; see here for related concepts, 'propositional attitudes'):

    Let's say I want to take a drink from my mug. I have an intentional state toward my mug, desiring to drink something out of it. I'm sure there are more than two ways of spelling out their content relevant to this discussion, but I'm going to write down two.

    ( A ) The content of my intentional state of wanting to take the drink from my mug is an attitude toward a mug. The mug is thought of as a synthesis of external, internal, sensory and active states all in feedback, and my intention toward the mug is actually an intention toward my current internal representation of the synthesis. Putting it loosely, the mug and my current bodily state are in a very definite and urgent collective organisation, which I then operate upon in the manner I do (I reach toward it, taste it etc).

    If we think of this in terms of affordances, this puts the affordance 'for drinking' in the mug, which I then action+intend.

    ( B ) The content of my intentional state of wanting to take the drink from my mug is an attitude toward a mug. The mug is thought of as a synthesis of external, internal, sensory and active states all in feedback, and my intention toward the mug is actually part of this synthesis.

    If we think of this in terms of affordances, this puts the affordance 'for drinking' in the relationship between the mug and me, so perceiving it as I do now contains my desire for drinking it.

    I think ( A ) invites perceptual intermediaries, since then all the different intentional states we have towards events are separated out from perceptions. It would go external states -> representations -> intentional states (representations), rather than external states -representations-> intentional states (this is the same point I made regarding thinking of perception as an arrow; here the arrow is labelled representation, rather than a node).

    I think ( B ) doesn't invite perceptual intermediaries, but I think it's got other problems. I think interweaving intentionality into perception and somehow 'beneath conscious awareness' is a standard phenomenological move, Heidegger calls it 'circumspective concern', Merleau-Ponty emphasises intentionality as pre-reflective (have a quote from Phenomenology of perception regarding intentionality and sex):

    “Erotic perception is not a cogitatio which aims at a cogitatum; through one body it aims at another body, and takes place in the world, not in a consciousness. A sight has a sexual significance for me, not when I consider, even confusedly, its possible relationship to the sexual organs or to pleasurable states, but when it exists for my body, for that power always available for bringing together into an erotic situation the stimuli applied, and adapting sexual conduct to it. There is an erotic ‘comprehension’ not of the order of understanding, since understanding subsumes an experience, once perceived, under some idea, while desire comprehends blindly by linking body to body. Even in the case of sexuality, which has nevertheless long been regarded as pre-eminently the type of bodily function, we are concerned, not with a peripheral involuntary action, but with an intentionality which follows the general flow of existence and yields to its movements. — MP, Phenomenology of Perception, 437

    I'm inclined to let intentional states be non-conscious and let them saturate perception, though I'm sure someone would disagree who thinks of intentionality as directed toward 'already formed objects' like cups etc; the kind of intentionality imaginable by directing 'a desire to drink' toward a mug (which seems to me derivative of the first, but that's another tangent).

    There was a final ambiguity I wanted to catalogue.

    Another distinction between the kind of directedness state relations have in a perceptual neural network and the kind of directedness intentional states have is the directedness of an intentional state might be an emergent
    *
    (I mean weakly emergent, but I'd guess there are strong emergentist takes too)
    property of the whole perceptual process. So it could be a category error to talk about how 'cup related states join with sensory/internal/active states' as if this were an intentional relationship between an agent and a cup. Or a simpler error in the considered scope of the perceptual process, in taking too small a subprocess for a perceptual modelling relationship with the cup to make sense ("that ain't a directed conscious state toward a cup, that's a composite object of reflected light and thirst!").

    The same too maybe holds of content - it could very well be an emergent
    ***
    (or large scale in terms of nodes and time)
    property (or aggregate state) of the perceptual process.
  • Banno
    23.5k
    (though I'm sure Banno would tell you that intentional states are directed towards statements) — fdrake

    Pretty much, with the usual qualification that an intentional state can be placed in the form of an attitude towards a proposition, and hence made the subject of discussion, but it need not be so treated. That is, we have unexpressed intentional states.
  • Andrew M
    1.6k
    I think that's a gross deflation of all the work that neuroscience has done on this. Most of the neuroscientists I've spoken to or listened to consider themselves to be investigating the matter of what perception is as a scientific investigation, not one in philology. — Isaac

    I'm sure that's so. However when we see a red flower, do we see it in the brain, or in the mind, or in the garden? I'm not suggesting this applies to you but without clarification of the terms involved, this is the kind of confusion that can arise. For example, from Bennett and Hacker:

    The discoveries [of neuroscience] are no doubt splendid and fascinating - the conclusions sometimes drawn from them [eg that what we perceive are pictures in the brain (Crick), or `virtual reality' constructed by the brain (Smythies), or `movies in the brain' (Antonio Damasio)] are one form or other of latent nonsense (concealed transgressions of the bounds of sense) as we demonstrated in PFN. — Letter to the Editor: Reply to critical review by Professor John Smythies, Perception, 2011, volume 40

    ...

    That is, this red flower here is the intended object of my perception.
    — Andrew M

    I agree with this. It's the 'realism' bit. The object we're all trying (with our modelling processes and our social interaction) to react to is the red flower, out in the world. I don't see how it being the object of our intention somehow removes the 'veil' between us and it. — Isaac

    If you agree that there can be a red flower there that I can perceive, then I'm not clear why you're invoking a "veil". What exactly is being veiled here?

    You're saying that any time we're mistaken about the properties of the object we've instead perceived nothing? If I perceive a flower, but in my mind it had red petals (I only briefly glanced at it), I return to it for a closer look and find I had merely assumed the petals were red (expectation bias; they were, quite clearly, pink). Now I have to admit that I perceived nothing at all? — Isaac

    The example I had in mind was a hallucination, which isn't perception. Yes of course there can be conditions where we see a flower that looks red (or assume is red), but isn't.
  • Isaac
    10.3k
    I baulk at equating each neural network with some attitude towards a proposition. — Banno

    Why is that?

    (I've got a vague memory of us discussing this before, but I couldn't find it in the 'archives')
  • Isaac
    10.3k
    someone is an internalist about X if they believe X only is determined by/depends upon the body or mind of the individual which bears X. — fdrake

    Do they have to believe in non-determinism of some sort? After all, our bodies have not been around forever (though mine sometimes feels like it has!)

    Rather, they are, partly, constituted by, or are composed of, factors that lie outside those boundaries. — SEP

    This is intriguing, do we have some examples? If I've understood it right, could my theories about the role of social narratives fit here (always looking for interesting new ways to frame this stuff)? That someone's beliefs often cannot be expressed without reference to the social entity which defines (part of) it? Is that what they're talking about, or have I missed the point entirely? - Side issue not related to the OP, I know. Just a quick yay or nay perhaps if I'm on the right track.

    Maybe it could be construed that the ball isn't a 'physical bearer' or 'partly constituting' the process of perception - if you focus on what's 'logged to consciousness' as a meaning for 'what's perceived', it might be possible to argue that 'what's perceived' doesn't have an immediate dependence upon the external state values because the sensory states interface with the world and the internal states which are logged to consciousness don't. There's probably some wrangling regarding where you draw the line. If the 'dependence' is 'any sort of dependence' rather than 'proximate cause in terms of states in the model's graph', it looks to be vehicle externalist in the process sense, if it's the latter maybe it's still possible to be a vehicle internalist. — fdrake

    I think you're right in that it could, but I'm inclined to think (as I suspect you are) that it would be a mistake to do so. My reason is that it would mistake the process of perception for the process of response (speaking, reaching, catching the ball). We're in danger of an excessive holism this way, as I mentioned with the process of perception itself. We are one deeply interconnected body, within which is a deeply connected mind with causal connections reaching deep into the world around us. Without being comfortable with drawing (semi-)arbitrary lines around a process we end up unable to say anything about anything. So, I think this is just such a case. Even though the perception of the ball is intricately connected to the process of catching the ball, we have to be comfortable drawing a line between them somewhere, just as a façon de parler if nothing else. In that spirit, I'd say that the process of perception is directly connected to the ball, the process of catching only indirectly so. To the ball-catching system, the actual ball's location is a hidden state inferred by the signals it receives from the ball-perceiving system. To the ball-perceiving system, however, the ball's location is the direct cause of the state of some of its nodes (certain retinal ganglia, for example) - the wind/air pressure/gravity/propulsion would be hidden states to that system.

    So there's a puzzle regarding bridging the 'content of a state in a neural network' with the content of an intentional act.

    The content of a state in a neural network doesn't seem to be a good match for the use of the word 'cup', since using the word to refer to a cup involves a perception which consists in lots of states synergising together in a body-environment interaction
    fdrake

    Again, I think allowing ourselves some arbitrary lines helps to talk about this. The 'content of a state in a neural network' is one of the exterior-facing nodes in the 'take a sip from that cup' system. But important missing nodes are things like 'what a cup is', 'what people in my culture do with cups', 'what effects are likely to result from sipping from it' etc. None of which are directly contained in the process of perception, despite being intimately linked to it.

    Did I ever tell you (sorry if I end up repeating myself) about the experiment on macaques where they put a blocker to interfere with the connection between the dorsal and ventral streams exiting the visual cortex? They could interact with bananas in a perfectly accurate manner (locate them, pick them up with appropriate pressure etc), even peel them, but they had no idea what they were (food, bargaining tokens etc). I think this shows that there's a perfectly acceptable (semi-)arbitrary line we can draw, if we so wish, between the object of perception as a physical emitter of light/sound/pressure and the object of perception as a social object (one with a name, a use, a role in our intentions).

    Let's say I want to take a drink from my mug. I have an intentional state toward my mug, desiring to drink something out of it. I'm sure there are more than two ways of spelling out their content relevant to this discussion, but I'm going to write down two.fdrake

    I think essentially we'd be remiss if we didn't include our intentions toward an object in the act of perception, but again, if we're not to prevent ourselves from being able to say anything at all, we have to be able to draw a line somewhere. I may be oversimplifying, but is there any reason why we shouldn't draw the line at the decision to act?

    If we're asking the question "Why did you hit your brother?" we might well include intentionality in the perception "he was about to hit me". Did our aggressive intention have some role in the perception of the shoulder going back, the fist clenching? Probably. But at the point of the message being sent to the arm to strike - that's the point we're interested in - not because it's got some ontological significance, but because that's what we asked the question about. At that point, there was an object (a brother threatening violence) which was the result of some perception process (plus a ton of social conditioning) and the object of an intention (to punch). I don't think it matters that the intention (to act aggressively) might have influenced the perception (a person about to hit me). We can have our cake and eat it here. We can talk about the way in which the intention influences the perception of the object before the question we want to ask of it and still have the final version* be the object of the intention we're asking the question about. (*final version here referring to the object on which the move to strike was based.)

    After the action in question, the whole process will continue seamlessly, the perception might change a bit as a result of our interaction with the object, our intention might change and so affect the perception..., but we marked a point in that continuous process, simply to ask a question (why did you hit your brother), and to answer that question we need to 'freeze-frame' the movie to see what the object of perception was at the time the intentional decision was made.

    It's no different to asking what speed a car's going. You have to just pick an arbitrary distance and measure how much time it took to cover it. It inevitably makes an artificial break in a continuous process, but it's the only pragmatic option.

    Another distinction between the kind of directedness state relations have in a perceptual neural network and the kind of directedness intentional states have is the directedness of an intentional state might be an emergent (I mean weakly emergent, but I'd guess there are strong emergentist takes too) property of the whole perceptual process. — fdrake

    Like saccades, perhaps? Yes, I think there must be cases where this is true, but again, probably just some, not all. We'd be missing something if we wanted to model perception and action this way, but we'd be kidding ourselves if we didn't have such a model to explain things like saccades.
  • Isaac
    10.3k
    However when we see a red flower, do we see it in the brain, or in the mind, or in the garden? I'm not suggesting this applies to you but without clarification of the terms involved, this is the kind of confusion that can arise. — Andrew M

    Yes, it's the primary difficulty here. If I (as a scientist) am to explain what your 'seeing the rose in the garden' consists in, I can't very well give the answer "you're not seeing the rose in the garden". That didn't really answer the question. But equally, I'd be remiss if I didn't provide an explanation of how you can see the red rose out of the corner of your eye despite the dendritic trees from the ganglia there being too complex to interpret colour from. You filled in the colour you expected the rose to be, nothing to do with any physical activity in the actual garden.

    If you agree that there can be a red flower there that I can perceive, then I'm not clear why you're invoking a "veil". What exactly is being veiled here? — Andrew M

    Because the 'red flower' I'm trying to model and the current 'snapshot' state of my model are not necessarily the same, and some of the reason they're not the same is expectation biasing the interpretation of (and occasionally outright suppressing) the sensory data. It's only the sensory data which is directly connected to the 'red flower', the thing I'm trying to model. The 'veil' is everything else which plays a part in the modelling process not caused directly (or even indirectly) by the 'red flower'.

    The example I had in mind was a hallucination, which isn't perception. Yes of course there can be conditions where we see a flower that looks red (or assume is red), but isn't. — Andrew M

    Then how is that not a 'veil'? If we can see a flower as red but it isn't red, then what got in the way? Whatever got in the way - that's what I'm referring to as a 'veil'.
  • frank
    14.7k
    Speech, action, emotional responses, strategies, and more complex mental reactions. These all result from the perception of the flower, not the flower. — Isaac

    :up:
  • Harry Hindu
    4.9k
    All of this can be put simply as "Spider hallucinations look like spiders" - no use of "qualia"!

    What's relevant about an hallucination of a spider is that there is no spider. Hence, as you point out, characterising some event as an hallucination presumes realism. — Banno
    What does it mean for hallucinations to look like the real thing? How can something that isn't real look like something that is?

    And what does it mean to say that the hallucination isn't real? Are you saying that hallucinations themselves aren't real, or that they don't represent anything that is real? If the latter, then aren't we talking about representations (qualia) vs what is represented (spiders)? And are the representations real things themselves?

    To even talk about hallucinations and compare them with other things must mean that you think that they are real and have real effects in the real world, and can be compared to real things. How can you compare something that isn't real with something that is?

    To be sure, realism is the view that there is stuff in the world that is independent of the mind, so the claim that what is real is stuff in the mind would not count as realism.Banno
    But "real" in what sense? You seemed to agree earlier with the statement, "we are our minds". Are you saying that "we" and our "minds" are not real?
  • Andrew M
    1.6k
    Yes, it's the primary difficulty here. If I (as a scientist) am to explain what your 'seeing the rose in the garden' consists in, I can't very well give the answer "you're not seeing the rose in the garden". That didn't really answer the question. But equally, I'd be remiss if I didn't provide an explanation of how you can see the red rose out of the corner of your eye despite dendritic trees from the ganglia there being too complex to interpret colour from. You filled-in the colour you expected the rose to be, nothing to do with any physical activity in the actual garden.Isaac

    :up:

    Because the 'red flower' I'm trying to model and the current 'snapshot' state of my model are not necessarily the same, and some of the reason they're not the same is expectation biasing the interpretation of (and occasionally outright suppressing) the sensory data. It's only the sensory data which is directly connected to the 'red flower', the thing I'm trying to model. The 'veil' is everything else which plays a part in the modelling process not caused directly (or even indirectly) by the 'red flower'.Isaac

    :up:

    Then how is that not a 'veil'? If we can see a flower as red but it isn't red, then what got in the way? Whatever got in the way - that's what I'm referring to as a 'veil'.Isaac

    So the phrase "veil of perception" has a historical connection with certain 17th century metaphysical views that deny any direct world-involvement. But your use of the term is different to that - you allow that we see things as they are at least some of the time, which is sufficient to get a footing in the world. That is, our model of (some part of) the world and the world we are modeling sometimes match up. I think we essentially agree.
  • fdrake
    6k
    Do they have to believe in non-determinism of some sort? After all, our bodies have not been around forever (though mine sometimes feels like it has!)Isaac

    I think being a content or vehicle internalist is independent of non-determinism. If determinism is the claim that mental content or mental vehicles have their behaviour completely determined by prior states, or past events, that leaves the type of relationships possible between mental contents and how mental vehicles work largely untouched. Similarly, if someone's an externalist, they could believe the environment has some entirely random states in it. Determinism/non-determinism regarding mental events seems to me like a question of to what extent past events constrain present events, rather than a question of which events together count as mental or how mental stuff works.

    A move which gets taken is to massage the notion of dependence and the type of content. You could 'bite the bullet' of whatever externalist argument you like which was dedicated to mental content of type X and say "Yes, type X as a whole has some external dependence, but type X1 which is a subset of X does not", I think that type of mental content gets called 'narrow'.

    A narrow content of a particular state is a content of that state that is completely determined by the individual's intrinsic properties. An intrinsic property of an individual is a property that does not depend at all on the individual's environment. For example, having a certain shape is, arguably, an intrinsic property of a particular penny; being in my pocket is not an intrinsic property of the penny. This is because the penny's shape depends only on internal properties of the penny, whereas the fact that it is in my pocket depends on where it happens to be, which is an extrinsic property. The shape of the penny could not be different unless the penny itself were different in some way, but the penny could be exactly the way it is even if it were not in my pocket. Again, there could not be an exact duplicate of the penny that did not share its shape, but there could be an exact duplicate that was not in my pocket. Similarly, a narrow content of a belief or other mental state is a content that could not be different unless the subject who has the state were different in some intrinsic respect: no matter how different the individual's environment were, the belief would have the same content it actually does. Again, a narrow content of an individual's belief is a content that must be shared by any exact duplicate of the individual. (If some form of dualism is true, then the intrinsic properties of an individual may include properties that are not completely determined by the individual's physical properties. In that case an “exact duplicate” must be understood to be an individual who shares all intrinsic nonphysical properties as well as physical ones.) — SEP on Narrow Content

    Phenomenal stuff, what-is-it-like-to-be-you stuff, can be thrown into the narrow content category. I'm clearly not sympathetic with that, but it's a clear idea. If there are intrinsic properties to some mental content, those aren't relational properties; determination is a relational property, so those intrinsic properties aren't determined by external states - whatever 'internal states' determined that content suffice to... determine it. Not sympathetic, but...

    There is a less ineffable/radical way of interpreting narrow content - like, afaik, Chalmers does. Recall that content externalism was the claim that every mental state's content is determined in part by some non-bodily or otherwise external cause, and internalism was the claim that not every mental state's content is determined in part by some non-bodily or otherwise external cause. That lets you nominate a type of content, and perhaps a type of determination, to be an internalist regarding. So an internalist can be an externalist regarding most if not all types of mental content, but still be an internalist in some regard because they are an internalist regarding some type of mental content. Externalism says all, internalism says not all.

    An ambiguity in that ambiguity clarification (sorry) is that if internalism regarding content regards content independently of the state which the content is of, then it's quite different from considering content as part of the state. EG if you have a quale of the smell of your coffee, the overall state of smelling it stands in a relationship to the stimulus (and thus might be thought of as 'being wide'), but the particular qualia profile of your coffee
    *
    (at that moment etc, give it whatever charity you need to to make sense of the idea)
    can still be intrinsic and thus undetermined. If you slice up the content of experience, you can partition out the intrinsic from the non-intrinsic bit, and the intrinsic bit is narrow, even if you wanted to say that all states are broad and that people are 'in the world'. I don't know how popular that is as a move, or how important that wrinkle is.

    Anyway onwards, IEP has a summary of a point Chalmers has made about a candidate type of narrow content, what he calls epistemic content in this paper. I won't even pretend to understand the guts of the argument. Here's the IEP summary:

    David Chalmers builds on this conceptual role account of narrow content but defines content in terms of our understanding of epistemic possibilities. When we consider certain hypotheses, he says, it requires us to accept some things a priori and to reject others. Given enough information about the world, agents will be “…in a position to make rational judgments about what their expressions refer to” (Chalmers 2006, 591). Chalmers defines scenarios as “maximally specific sets of epistemic possibilities,” such that the details are set (Chalmers 2002a, 610). By dividing up various epistemic possibilities in scenarios, he says, we assume a “centered world” with ourselves at the center. When we do this, we consider our world as actual and describe our “epistemic intensions” for that place, such that these intentions amount to “…functions from scenarios to truth values” (Chalmers 2002a, 613). Epistemic intensions, though, have epistemic contents that mirror our understanding of certain qualitative terms, or those terms that refer to “certain superficial characteristics of objects…in any world” and reflect our ideal reasoning (for example, our best reflective judgments) about them (Chalmers 2002a, 609; Chalmers 2002b, 147; Chalmers 2006, 586). Given this, our “water” contents are such that “If the world turns out one way, it will turn out that water is H20; if the world turns out another way, it will turn out that water is XYZ” (Chalmers 2002b, 159).

    (SEP also has one in its 'narrow mental content' article linked previously)

    It might be an analogue to the 'Markov Blanket' idea as you use it to 'veil' the world from perceptions for the purposes of explaining mental states (Chalmers has a paper on some related and underlying notions). As I read it, and try to shoehorn it into this conversation: if you fix all the external states (facts, possible world truth evaluations) of a scenario and leave them unobserved at time t, you imagine all the internal states at time t-1, just before they come into contact with the external states, and imagine the kinds of predictions that the internal states of the person can provide before being exposed to the current batch of external states. That system of predictions will completely rule out some things from occurring (as we think, anyway; we might be wrong) and largely endorse some things; it will split up 'epistemic space' into what's plausible, irrelevant, implausible, etc. But what splits up the predictions is arguably solely determined by non-external properties, since you just fixed them.
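    Read that way, the partitioning move can be sketched crudely in code - fix the internal predictive states, and they alone carve candidate scenarios into ruled-out / implausible / plausible. All the scenario names and thresholds below are invented for illustration:

```python
# Crude sketch: the internal model's prior over candidate external scenarios,
# fixed at time t-1, before any contact with the current batch of external states.
predictions = {
    "red flower present": 0.70,
    "white flower present": 0.25,
    "no flower present": 0.05,
    "flower is a spider": 0.00,
}

def partition(predictions, low=0.10):
    """Split 'epistemic space' using only the fixed internal predictions."""
    parts = {"ruled out": [], "implausible": [], "plausible": []}
    for scenario, p in predictions.items():
        if p == 0:
            parts["ruled out"].append(scenario)      # completely excluded a priori
        elif p < low:
            parts["implausible"].append(scenario)    # barely entertained
        else:
            parts["plausible"].append(scenario)      # largely endorsed
    return parts

print(partition(predictions))
```

The point of the toy is that nothing external enters the `partition` step: the carving-up is wholly determined by the fixed internal states, which is what makes it a candidate for narrow content.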

    Why I did not spend more time with the Chalmers papers exegetically
    (I don't feel like I can give them a fair hearing at time of posting, and they're very technical, involving twin earths with multiple notions of modality in them.)


    I imagine you maybe have some sympathy with a view like that? Because it seems you're quite happy to fix an instance of the perceptual chain and treat the current 'content' of the 'write instruction to memory' as what is said to be perceived, in a manner where the antecedent internal states determine what is logged or not logged to memory (arguably a form of partitioning epistemic space, coincident with the write instruction).





    This is intriguing, do we have some examples? If I've understood it right, could my theories about the role of social narratives fit here (always looking for interesting new ways to frame this stuff)?Isaac

    It fits as a form of content or vehicle externalism. If social stuff or processes count as determinative of mental content, and they are outside the body or brain, then that's (at least very close to) an externalist thesis regarding the relationship of social stuff to mental content. Similarly if social processes act as some kind of distributed mental process - cf Lakatos' term for reasoning with people 'thinking loudly' -, then social processes are vehicles in that regard. The latter seems like an extended mind thesis towards the social milieu.

    I think essentially we'd be remiss if we didn't include our intentions toward an object in the act of perception, but again if we're not to prevent ourselves from being able to say anything at all, we have to be able to draw a line somewhere. I may be oversimplifying, but is there any reason why we shouldn't draw the line at the decision to act? If we're asking the question "Why did you hit your brother?" we might well include intentionality in the perception "he was about to hit me", did our aggressive intention have some role in the perception of the shoulder going back, the fist clenching - probably. But at the point of the message being sent to the arm to strike - that's the point we're interested in - not because it's got some ontological significance, but because that's what we asked the question about. At that point, there was an object (a brother threatening violence) which was the result of some perception process (plus a ton of social conditioning) and the object of an intention (to punch). I don't think it matters that the intention (to act aggressively) might have influenced the perception (a person about to hit me). We can have our cake and eat it here. We can talk about the way in which the intention influences the perception of the object before the question we want to ask of it and still have the final version* be the object of the intention we're asking the question about. (*final version here referring to the object on which the move to strike was based). After the action in question, the whole process will continue seamlessly, the perception might change a bit as a result of our interaction with the object, our intention might change and so affect the perception..., but we marked a point in that continuous process, simply to ask a question (why did you hit your brother) and to answer that question we need to 'freeze-frame' the movie to see what the object of perception was at the time the intentional decision was made.Isaac

    Think we see eye to eye there.

    Like saccades, perhaps? Yes, I think there must be cases where this is true, but again, probably just some, not all. We'd be missing something if we wanted to model perception and action this way, but we'd be kidding ourselves if we didn't have such a model to explain things like saccades.Isaac

    Yes I see it as implausible that intentionality isn't right down in the motor functions considering the directedness of visual foraging, that it's not conscious, and that it's salience+causal relevance+information density based. I recall having a long argument with @Banno about whether the intentionality in saccades counts as a form of belief that wasn't propositional (I argued that it was not propositional), so that might be another point of tension with someone who's quite strict about the relationship of mental content to statements and truth conditions.
  • Janus
    15.7k
    I recall having a long argument with Banno about whether the intentionality in saccades counts as a form of belief that wasn't propositional (I argued that it was not propositional), so that might be another point of tension with someone who's quite strict about the relationship of mental content to statements and truth conditions.fdrake

    Would the intentionality in saccades best be called 'belief' or 'expectation'? Regardless of that, would it not be the case that such expectations or beliefs, although not present to consciousness in propositional form, could be rendered as such?
  • fdrake
    6k
    Would the intentionality in saccades best be called 'belief' or 'expectation'?Janus

    I think that's a big rabbit hole.

    although not present to consciousness in propositional form, could be rendered as such?Janus

    I don't think the majority of eye movements can in principle be logged to consciousness - at least not in the standard way you would expect for a statement involving an account of propositional content. When you see a person, the pattern of eye movements which elicited that information is quite clearly directed towards informative sub-object features (or areas where they are anticipated/'believed' to be, etc), which then get synthesised/combined into the distinct perceptual features we can then consciously identify in our visual field at the time. If the saccade movement has content which can be explicated, it doesn't resemble anything like the intentional state of belief directed toward the visual content the saccade helps elicit (the representation/image/look of the person). Eye movements haven't seen what you've seen, because they are how you've seen what you've seen.

    You can tell what Terry Pratchett referred to as 'lies to children' about the content of saccades in terms of propositions, though. EG, someone might 'look at a chin to provide more information about the orientation of a face', but there's no conscious event of belief or statement associated with what the saccade's doing at the time, the speech act which associates the propositional content with the saccade is retrospective.

    Analogy: it's like saying the content of this pile of Lego blocks is castles because Lego in general can be used to build castles.

    If the criterion you're going for with 'the content of the saccade is propositional' is 'the content of the saccade can be set out as a proposition the agent is conscious of at the time', then it's false because we're not conscious of most saccades at the time. If it's 'the content of the saccade can possibly be set out as a proposition by a conscious agent after the fact', I'd be more inclined to agree that the content is propositional, with the rider that someone who accepts that criterion seriously strains the connection between propositional content and intentional acts - since propositional content and intentional acts no longer need to occur together in the same event for the retrojected content to count as the target of the intention at the time... Or alternatively if someone were to state that all beliefs amount to in total is a language game of telling 'useful lies to explain our behaviour' and didn't really care about their literal truth, I'd buy that too.

    There are reasons why people describe saccade content like this: "During the search phase (visual exploration for an object), subtask relevant features are attentionally prioritised within the attentional template during a fixation", even when they're speaking about a specific sub-part of a perception (a single time the eyes are still).

    Last time I went through that with @Banno, I recall falling into the trap of providing 'lies to children' examples to flesh out what the author was writing about from their example, which was turned into proof that I could explicate the content. This time I would advise any reader to try to guess, from that description alone, precisely the object the person was searching for and what the 'subtask relevant features' and 'attentional template' are.

    (Spoilers: if I recall correctly it was a teabag in a box of tea which the person was about to pick out. The subtask relevant features might be things like a protrusion of paper to grab hold of, a shuffling sound, a texture confirming touch and location information. The attentional template is a nebulous sense of what one should look out for when searching for a teabag within the box. Also, notice that the attentional template is 'within a fixation' - a momentary rest of the eyes to form a stable visual image is still saturated with directional/intentional content. Way too much going on which is intentional, and the majority is discarded and thus can't be recalled or explicated at the time. Afterwards? In principle? Maybe.)


    (Proof I'm not making shit up about the eye movements and features).
  • frank
    14.7k


    In some Chalmers book he said there's a silent verbal stream that coincides with attention. Maybe in a very rudimentary way data gathering is expressed propositionally. It's not impossible anyway.
  • Isaac
    10.3k
    So the phrase "veil of perception" has a historical connection with certain 17th century metaphysical views that deny any direct world-involvement.Andrew M

    Ahh. I'm not very well versed in these things so didn't realise I was sowing more confusion than I was eradicating. Best I avoid the term in future, thanks.

    our model of (some part of) the world and the world we are modeling sometimes match up. I think we essentially agree.Andrew M

    Yeah. We have a vested interest in them matching up, not just with the world, but (and this is the really important part, for me) with each other's models. In fact I'd be tempted to go as far as to say that it's more important that our models match each other's than that they match the state they're trying to model. I'm pretty sure this is the main function of many language games, the main function of social narratives, the main function of rational thought rules. To get our models to match each other's.
  • Isaac
    10.3k
    A move which gets taken is to massage the notion of dependence and the type of content. You could 'bite the bullet' of whatever externalist argument you like which was dedicated to mental content of type X and say "Yes, type X as a whole has some external dependence, but type X1 which is a subset of X does not", I think that type of mental content gets called 'narrow'.fdrake

    Yeah, I see how that could work. I was more thinking of the whole chain, such that some mental content could not be wholly dependent on other mental content because at some prior time there was no mind. Kind of a mental version of the Kalam Cosmological Argument (and no-one wants to go there!).

    that system of predictions will completely rule out some things from occurring (as we think anyway, we might be wrong) and largely endorse some things, it will split up 'epistemic space' into what's plausible, irrelevant, implausible etc. But what splits up the predictions is arguably solely determined by non-external properties, since you just fixed them.

    I imagine you maybe have some sympathy with a view like that?
    fdrake

    Yeah, that's right (and thanks for the great links, not generally a fan of Chalmers - too much qualia and zombies - but always satisfying to read some crossover, makes me realise we're not all radically different, still trying to navigate the same world!). For me, it's more pragmatic than anything else. We simply can't be including all the wider net of influences in all our thinking about any given step or node in a network, we'd never get anything done. I can, however, see how this causes a great deal of confusion about my position, and some of that is my fault for (innocently) stepping into a world where people genuinely do try to argue for things like an entirely mind-dependent reality, and not being careful enough to distinguish my thinking from theirs. Though I should say, I have (from reading the papers you cited) some grave concerns about the route Chalmers takes to get here (if here is indeed where I think he is - I suspect my ability to understand what he's on about is substantially less than yours). I'm not sure that the modality is actually a viable approach if he's trying to get at the way we actually think. There's too little scope in a kind of 'this else that' model where I think it's more 'this until further notice', but I may have misunderstood.

    if social processes act as some kind of distributed mental process - cf Lakatos' term for reasoning with people 'thinking loudly' -, then social processes are vehicles in that regard. The latter seems like an extended mind thesis towards the social milieu.fdrake

    By odd coincidence, see my response to Andrew above, I happen to have started banging on about that exact topic...

    Yes I see it as implausible that intentionality isn't right down in the motor functions considering the directedness of visual foraging, that it's not conscious, and that it's salience+causal relevance+information density based. I recall having a long argument with Banno about whether the intentionality in saccades counts as a form of belief that wasn't propositional (I argued that it was not propositional), so that might be another point of tension with someone who's quite strict about the relationship of mental content to statements and truth conditions.fdrake

    So, there are these strong connections, whose function neuroscientists (to my knowledge) have yet to fully work out, between early areas of sub-conscious cortices and the hippocampus - an example might be the V2 region of the visual cortex. Usually a connection to the hippocampus is involved in consolidation of some memory, so it seems odd that such early regions would be strongly tied to it. One idea is that there's some higher-level modelling suppression going on even in these early centres, like - 'is that likely to be an edge? Let me just check'. I think (though I can't lay my hands on any papers right now) there's one of these connections into the cerebellum too.
  • Isaac
    10.3k
    Would the intentionality in saccades best be called 'belief' or 'expectation'?Janus

    What do you see as the difference between the two?
  • frank
    14.7k


    Some aspects of human consciousness are the same as that of a worm, right? Is it appropriate to think of a worm's consciousness as intention-driven? Are we going to end up equivocating about "intention" if we do?
  • Isaac
    10.3k
    Is it appropriate to think of a worm's consciousness as intention-driven? Are we going to end up equivocating about "intention" if we do?frank

    You'd like to reserve 'intention' for conscious species? I see that as more of a religious/ideological decision than a scientific one. Not that that makes it wrong, but it's just not a useful distinction for me.

    I suppose we could differentiate between actions which we can trace through some typically conscious areas of processing and those we can't, but it'd be a very hazy line.

    I certainly think it would be possible, but I'm not quite as clear on what we'd gain by doing so.
  • frank
    14.7k
    You'd like to reserve 'intention' for conscious species?Isaac

    Hold up. What do you mean by "conscious" here? What is a worm missing that it would need in order to be conscious?