• creativesoul
    11.9k
    Phenomenology is a philosophical position that aims to explain conscious experience. It is an explanation.
    — creativesoul

    I don't think so. "Phenomenology is the study of structures of consciousness as experienced from the first-person point of view" — the Stanford Encyclopedia of Philosophy. Saying that phenomenology is an attempt at explaining consciousness is like saying that Newtonian mechanics is an attempt at explaining why "forces" and "energy" exist...
    khaled

    Non sequitur. That doesn't follow from anything I've said here. What are you talking about? I've not said anything at all about caring 'why' the components of conscious experience exist. I'm talking about how it happens. There's a significant bit of irony here, given that you're misattributing meaning to my words, and the attribution of meaning is itself the emergence of conscious experience... it's the how part.

    Your belief is not required here, so you do not have to think so. You're perfectly within your rights to think otherwise.

    "Phenomenology" is the name of a certain philosophical school of thought. That school of thought had/has as it's target something that existed in it's entirety prior to our reports of it:human conscious experience.



    Newtonian mechanics doesn't care about why its components exist; it is a study of how they interact. Same with phenomenology

    Name these components of which all conscious experience consists.
  • creativesoul
    11.9k
    The connection between this "internal" experience and the "external" world is consequently mysterious.
    — Andrew M

    Loosely speaking, 'the connection' is the experience, on my view.

    It consists of both internal and external, physical and non physical, subjective and objective. The problem I seem to see is that both sides miss this. Experience is neither objective, nor subjective; neither internal nor external; neither physical nor non physical...

    It is both.
    — creativesoul

    Or neither
    Andrew M

    Yes. Erring on the side of neither dispenses with the inherently inadequate dichotomies altogether.



    I think the divisions themselves, as understood in their Cartesian sense, are misleading and unnecessary. They don't arise in normal communication
    Andrew M

    However, I'm hesitant, wondering if all experience has internal and external components, physical and non-physical components; something to be connected and a creature capable of making connections, where the connections are the 'neither' part but that which is being connected is one or the other (or both, in the case of metacognitive endeavors).

    So, while the subjective/objective dichotomy can be thrown out simply by granting subjectivism in its entirety, I'm wondering whether or not the internal/external and physical/non-physical dichotomies can be equally dispensed with.
  • creativesoul
    11.9k
    We know experiences are caused by brains. But we do not know that the same experiences are caused by everyone's brains. As in, I don't know if when I look at a red apple and you look at a red apple we both have the same experience. I know we both call it "red" and it has largely the same relationship in our brains. As in, mostly everything I call red you also call red or orange or something around there (assuming neither is colorblind). That does not give evidence that we are experiencing the same thing.

    Throws the same thing at khaled...

    The same experience is not caused by the same brain... Thus, the variation cancels out. It tells us nothing except that each and every experience is unique, and that no report regardless of first or third person perspective can be complete. But so what? No one is asking for a complete explanation of the red apple being one component of many in the conscious experience of seeing a red apple (or being hit in the arse with it).
  • creativesoul
    11.9k
    You said before that you disagree with Dennett and that the neurology does not explain why we have a conscious experience. So are you proposing that you have a solution to that problem? If so what is it?
    khaled

    What problem?

    Neurology is a discipline that tells us much about how conscious experience happens.
  • Wayfarer
    22.5k
    I appreciate the explanation, but I'm still not seeing the 'study'. If one performs this 'bracketing' then one has a list of experiences which one just accepts unquestioningly as being what they are. Great. What have we learned that we didn't previously know?
    Isaac

    It takes work, it takes training. Say in your case, you interpret pretty much everything in terms of a scientific framework - objective facts, satisfactory explanations and so on. That is internalised in such a way that it becomes second nature to you. Becoming aware of such patterns of thought is an aspect of that kind of training.

    We've come a long way since Descartes, but where his outlook lingers is in the shadow of 20th century attempts to push materialism to its limit: to remove all of the things Descartes labeled as internal.
    frank

    :up:
  • fdrake
    6.6k
    Yes, but again with caveats I'm afraid. I presume you're talking about mutually exclusive variables to an extent (again with ceteris paribus). In normal circumstances all four would collectively determine - ie there's no other factor - I want to leave aside the thorny issue of whether there might be some random factor for the moment as I don't think it's relevant (my gut feeling is that there might be at least a pseudo-random one resulting from the chaos effect of such a complex system).
    Isaac

    I definitely should've highlighted that I was claiming that in normal circumstances items (1) to (4) do collectively determine the process of perceptual feature formation. I do think they're mutually exclusive components of perceptual feature formation - they have different names and play different procedural parts - but all four variable types are informationally and causally connected so long as there's an agent actively exploring an environment during a task.

    When I said the types are connected, I mean some variable that belongs to each type is connected to a variable that belongs to some other type, though it need not be a direct contact in order for it to count as connected. In the network of variables in the model, that would correspond to there being a path from a variable in every type to some variable in every other type, rather than having every variable in each type be a neighbour of some variable in each other type. If that's super dense, it's the same as colouring task parameter and prior variable nodes red and hidden state nodes blue in whatever variable network the model has, then saying "there's at least one arrow between the red ones and the blue ones".

    So they're "mutually exclusive" in terms of being qualitatively distinct variable types in the variable network of the model, but they're not thereby causally or statistically independent of each other since they're connected.

    We may have got crossed wires. What I mean by saying that the thing modelled is 'the apple', which is a public model, is not intended as an entanglement of some hidden state with the public model. It's a limit of language (which is what I was trying to get at in my edit). The process of 'seeing' could be seen as essentially that of fitting sensory data to priors (filtering of priors being task-dependent). So the meaning of 'I see an apple' might be 'the sensory input best fits the public model of 'apple'', but this is not the same as saying that we see 'model-of-apple', because that would be to make that Cartesian divide of 'seeing' into object > qualia > perception (of qualia). It's just that that's what 'seeing' is, so it's only correct to say we 'see the apple'.

    If we wanted to phrase all this in terms of purely Markov chains in the process of perception, then I don't think we can say any more than that the cause of our perceptual feature has no name. We do not name hidden states, we only name objects of perception.
    Isaac

    Edit - Another way of putting this (the language gets complicated) might be to say that we do name the hidden state (apple), but that these christenings then produce fuzziness on the hidden states we could possibly refer to in any given instance of perception - so the hidden state that is in direct causal relationship with our perceptual system will be only fuzzily identified by any word we apply. I'm not sure which approach is best (if any); I don't think we've really got the linguistic tools we need to develop theories about objects of perception.

    I think we did get wires crossed, but I suspect we disagree on something somewhere - maybe in the nature of that entanglement and the relationship language plays to it. One way of reading the second paragraph makes how language is used a consequence of perceptual feature formation. So it would go like: hidden state -> apple perceptual features -> "I see an apple". But AFAIK there are also models (I think we've talked about this before on the forum, in the context of Barrett's work) that look more like:

    hidden state -> categorising of sensory inputs -> output perceptual features

    hidden state -> categorising of sensory inputs -> language use

    but also with:

    output perceptual features -> categorising sensory inputs

    and

    language use -> categorising sensory inputs

    feedbacks somewhere in the model. So once someone is categorising sensory inputs in a sufficiently mature way, they already have prior language use and prior perceptual features feeding forward into the categorisation of sensory inputs.

    To be clear, by categorising sensory inputs I mean a device that distinguishes foraged data generated by hidden states and aggregates them into related salient types based on previous model states. This is part of perceptual feature formation. For example, that I see the duck in the duckrabbit, or the rabbit, at any given time. The salience bit says I see a duck or a rabbit, not a meaningless scribble. The types are the duck and the rabbit. Categorisation is assigning something a type.
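    A toy version of that device (the numbers and the weighting scheme are entirely my invention, just to make the idea concrete): the same ambiguous input gets assigned a different type depending on the previous model state.

    ```python
    # Minimal sketch: categorisation as prior-weighted type assignment.
    def categorise(likelihood, prior_weights):
        """Score each type by input fit weighted by the current model state."""
        scores = {t: likelihood[t] * prior_weights[t] for t in likelihood}
        return max(scores, key=scores.get)

    # The duckrabbit layout fits both types about equally well.
    ambiguous_input = {"duck": 0.5, "rabbit": 0.5}

    # Whatever was just seen or talked about tilts the weights.
    after_duck_talk   = {"duck": 1.2, "rabbit": 0.8}
    after_rabbit_talk = {"duck": 0.8, "rabbit": 1.2}

    print(categorise(ambiguous_input, after_duck_talk))    # duck
    print(categorise(ambiguous_input, after_rabbit_talk))  # rabbit
    ```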

    To put it starkly, it seems to me that there's evidence that language use plays some role in perceptual feature formation - but clearly it doesn't have to matter in all people at all times, just that it does seem to matter in sufficiently mature people. Language use seems to get incorporated into the categorisation aspect of perceptual feature formation.

    The layout of lines on the page isn't changing in the duckrabbit, but the state of my perceptual models regarding it is varying in time - at one time the pair of protrusions functions to elicit the perceptual feature of rabbit ears, at another it functions to elicit a duck's bill.

    So the issue of the degree of "fuzziness" associated with labelling hidden state patterns with perceptual feature names comes down to the tightness of the constraint the hidden states place upon the space of perceptual features consistent with them, and the nature of those constraints more generally.

    I would like to highlight that the duckrabbit stimulus can only cause model updates after its observation. So in that respect, the hidden states which are constitutive of the duckrabbit picture act as a sufficient cause for seeing it as a duck or a rabbit, given that someone has a perceptual system that can see the layout as a duck or a rabbit. But it is only a sufficient cause when conditioning on the presence of a suitably configured perceptual system.

    Analogy, "if it looks like a duck and quacks like a duck it's a duck", taking that way too literally, if someone observed something that quacked like a duck but did not look like a duck, on that basis alone the believer of "if it looks like a duck and quacks like a duck..." could not conclude that it was a duck. But if they then observed the thing making the quacking noise looked like a duck, they could immediately conclude that it was a duck. The "quacking" was in place already, so "it looking like a duck" was sufficient for the conclusion that it was a duck.

    Translating out that analogy: we expect certain configurations to be ducks and certain configurations to be rabbits - we expect ears and rabbit faces on rabbits, bills and long necks on ducks - so if you show a picture of a duck to someone who will see bills and long necks appropriately arranged as a duck, they will see the duck. To be sure, that's not a very complete list of duck-eliciting hidden state patterns (and someone who doesn't know what a duck is will probably see what we would call a duck, just not "package" those patterns together as a duck)... but I shall assume you know what I mean.

    In summary: I think that issue boils down to whether the duckrabbit's hidden states associated with page layout cause duck or rabbit given my priors and task parameters. I think that they do. It isn't as if the hidden states are inputs into a priorless, languageless, taskless system; the data streams coming out of the hidden state are incorporated into our mature perceptual models. In that respect, it does seem appropriate to say that the hidden states do cause someone to see a rabbit or a duck, as one has fixed the status of the whole model prior to looking at the picture.
  • fdrake
    6.6k
    Yep, I think that's actually a good way of putting it. I've described myself as an indirect realist before, but these are not terms I have an in-depth knowledge of, so I'm not attached to them. My question was really just getting at the issue of how we define the boundaries of a 'perceptual system'. Where does the perceptual system end and some other system take over (even if only in theory, to show that it never does)? If we just say that the boundaries of the perceptual system are the edge of the Markov Blanket, then your version of direct realism is true, but only by definition (ie if some other process intervened between the hidden state and the perceptual system it would, by definition, either be a hidden state itself or part of the perceptual system).
    Isaac

    I think that's about what I meant. Don't come away from what I've said with the idea that "how fdrake thinks about direct realism" is canonical, though. For me directness is just a lack of perceptual intermediaries. I think people who are professionally direct realists have different commitments to that. For some it seems to come down to mind dependence vs mind independence of what is perceived (and clearly that intersects with the perceptual intermediary debate; mind-dependent perceptual intermediaries are strongly intuition-pumped by arguments from dreaming/illusion), and there's also an element of whether (and how) perceptual features are real.

    So to get a Cartesian Theatre problem (in order to disprove it empirically rather than definitionally) we'd have to say that the creation of 'the play' out of some hidden states was not part of the perceptual system - the perceptual system was the bit watching the play. If we say the play-making mechanisms are part of the perceptual system then the system is in direct causal relationship with the hidden states (it's just that the description of the perceptual system is wrong). I don't see anything wrong here at all; I only wanted to clarify which way you were looking at it.

    Aye. I think that's true. I think the directness claim (no perceptual intermediaries) is an easy consequence of any active perception account which includes environment-exploring actions as a component of perception. If you pick something up, there has to be causal contact between the mass of the thing and how you sense and adapt to loads. If "how you sense and adapt to loads" in total is labelled as (a part of) perception, then perception (as a process) is in direct causal contact with the world.

    Though, devil's advocating it, that direct causal contact could be between a perceptual intermediary and the mass. But I think that requires the Cartesian Theatre metaphor to be true - it would only be an intermediary if the perceptual intermediary was submitted to some distinct faculty or process (note: not talking about passing inhibited patterns of signals around in the process of feature formation; the more cognitive aspects are lumped in). So that's going to turn on whether it's more appropriate to emphasise action in perception than a "submission" process to consciousness as a distinct faculty.

    Without that submission process and with an active account of perception, directness in the sense I meant (I think) is implied. It is almost true by definition (within the account of perception), but whether it's supported in practice turns on the behaviour of the account using it and accounts which don't use it.
  • Marchesk
    4.6k
    What problem?

    Neurology is a discipline that tells us much about how conscious experience happens.
    creativesoul

    Yeah, but as Luke in this thread (and Chalmers elsewhere) has pointed out, it doesn't explain why any physical system would be conscious. Our understanding of physics would not predict this if we weren't already conscious. A nervous system wouldn't fundamentally be different than a computer with input devices, in that regard.

    Why do we see colors and feel pain when no other physical system does this, as far as we can tell? What would it take for a robot to do so? Did Noonien Soong slip a qualia chip into Data's positronic brain?
  • Isaac
    10.3k
    the explanations are just replacing phenomenological terms with statistical ones. That's not an explanation. It's equivocation.
    Marchesk

    I don't understand what you mean by this. Perhaps you could clarify with some examples from Seth's papers?
  • Isaac
    10.3k
    in your case, you interpret pretty much everything in terms of a scientific framework - objective facts, satisfactory explanations and so on. That is internalised in such a way that it becomes second nature to you.
    Wayfarer

    ...and you know this how?
  • Wayfarer
    22.5k
    From reading your posts. I could provide examples but please save me the trouble.
  • Wayfarer
    22.5k
    Look, everyone, get this: you can't explain consciousness, because consciousness is the source of any and all explanation. Get over it, and find something else to discuss.

    I regard consciousness as fundamental. I regard matter as derivative from consciousness. We cannot get behind consciousness. Everything that we talk about, everything that we regard as existing, postulates consciousness. — Max Planck

    Over and out!!
  • Isaac
    10.3k
    So they're "mutually exclusive" in terms of being qualitatively distinct variable types in the variable network of the model, but they're not thereby causally or statistically independent of each other since they're connected.fdrake

    Cool, that makes sense. We have common ground here.

    It isn't as if the hidden states are inputs into a priorless, languageless, taskless system; the data streams coming out of the hidden state are incorporated into our mature perceptual models. In that respect, it does seem appropriate to say that the hidden states do cause someone to see a rabbit or a duck, as one has fixed the status of the whole model prior to looking at the picture.
    fdrake

    I see what you mean here. If at any given time the only variable that really is 'varying' in the system is the hidden state, then we can appropriately talk about a direct causal relationship. Like triggering a pinball: the various flippers and pegs are going to be determinative of its path, but they're fixed, so right now its path is directly caused by the strength of the trigger?
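    Something like this, maybe (a throwaway sketch with made-up numbers, just to make the 'fixed parameters' point):

    ```python
    # The table layout is a fixed parameter of the function, so across any
    # one launch the only varying cause of the path is the trigger strength.
    FLIPPER_LAYOUT = (0.9, 1.4, 0.4)   # pegs/flippers, unchanged per game

    def ball_path(trigger_strength, layout=FLIPPER_LAYOUT):
        """Deterministic path given a fixed layout; only the trigger varies."""
        position, path = 0.0, []
        for deflection in layout:
            position += trigger_strength * deflection
            path.append(round(position, 2))
        return path

    print(ball_path(1.0))  # one path...
    print(ball_path(2.0))  # ...a different path, caused only by the trigger
    ```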

    So, if we want to answer the question "what are people modelling?" I think the only answer can be 'hidden states'; if it were anything less than that then the whole inference model wouldn't make any sense. No-one 'models' an apple - it's already an apple.

    But...

    If we're talking about the properties of those hidden states which constrain the model choices...

    So the issue of the degree of "fuzziness" associated with labelling hidden state patterns with perceptual feature names comes down to the tightness of the constraint the hidden states place upon the space of perceptual features consistent with them, and the nature of those constraints more generally.
    fdrake

    I'd agree here. Do you recall our conversation about how the two pathways of perception interact - the 'what' and the 'how' of active inference? I think there's a necessary link between the two, but not at an individual neurological level; rather, at a cultural, sociological level. All object recognition is culturally mediated to an extent, but that cultural categorising is limited - it has functional constraints. So whilst I don't see anything ontological in hidden states which draws a line between the rabbit and the bit of sky next to it, an object recognition model which treated that particular combination of states as a single object simply wouldn't work; it would be impossible to keep track of it. In that sense, I agree that properties of the hidden states have (given our biological and cultural practices) constrained the choices of public model formation. Basically, because the dorsal pathway's activities in object manipulation etc will eventually constrain the ventral pathway's choices in object recognition, but there isn't (as far as we know) a neurological mechanism for them to do so at the time (ie in a single perception event).

    A little of what we know. Object recognition in adults is mediated by two systems. A high-level one relates to naming, conceptual properties (such as use, ownership etc) and manipulation. But we also have what's called a mid-level system, which is responsible for object tracking and enumeration. This system appears (in adults) to be independent (broadly meaning lesions in it can cause independent issues). Here spatiotemporal signals are king (what moved relative to what); things like edge, colour, shape etc play a secondary role in the case of stationary objects. But none of this directly informs the higher-level system (at the time of a single perception event). The higher-level system is extremely culturally mediated, and is very difficult to actually change by perceptual features alone.

    Experiments on surprise and attention in infants have indicated that their own object enumeration relies heavily on spatiotemporal markers, and so it seems likely that this system is the primary object division system and the higher-level one is secondary. Interestingly, infants as young as 2 months show strong object recognition in this primary mid-level system, but not until 18-24 months do they have an equivalent grasp of object recognition in the higher system.

    So higher level it might first go... hidden state properties > some constrained model space > cultural/biological modelling process > object christening

    Then in an instance of perception... hidden state properties > some (now constrained) choice of public models > object recognition

    But mid-level it would go... hidden state spatiotemporal properties > (possibly drawing on other mid-level properties - shape, edge etc) > object enumeration > object recognition
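    Very schematically, the contrast between the two routes might be sketched like this (all names and scoring functions below are invented placeholders, not a real model):

    ```python
    # Toy contrast between the two routes in a single perception event.
    def mid_level_route(spatiotemporal_signals):
        """Enumerate objects from motion/position cues alone."""
        moving = [s for s in spatiotemporal_signals if s["moved_relative"]]
        return [f"object_{i}" for i in range(len(moving))]

    def high_level_route(hidden_state_properties, public_models):
        """Recognise against a culturally constrained set of public models."""
        return max(public_models,
                   key=lambda name: public_models[name](hidden_state_properties))

    # Hypothetical inputs for one perception event.
    signals = [{"moved_relative": True}, {"moved_relative": False}]
    models = {  # the 'christened' public model space
        "apple":  lambda p: p.get("round", 0) + p.get("red", 0),
        "rabbit": lambda p: p.get("ears", 0) + p.get("furry", 0),
    }

    print(mid_level_route(signals))                          # ['object_0']
    print(high_level_route({"round": 1, "red": 1}, models))  # 'apple'
    ```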

    In order to have this model we have to have a cognitive facility to model new data (cultures are groups of people after all), so in that sense I agree that

    language use plays some role in perceptual feature formation - but clearly it doesn't have to matter in all people at all times, just that it does seem to matter in sufficiently mature people.
    fdrake

    We may disagree as to the extents, but I think we have common ground on the general process.
  • magritte
    553
    I regard consciousness as fundamental. I regard matter as derivative from consciousness
    Wayfarer
    And also the other way around. Kant have either one without the other.

    infants as young as 2 months show strong object recognition in this primary mid-level system, but not until 18-24 months do they have an equivalent grasp of object recognition in the higher system.
    So higher level it might first go... hidden state properties > some constrained model space > cultural/biological modelling process > object christening
    Isaac

    A 2-month-old is a different animal from an 18-24-month-old. Adding another circle to the model can't do justice to the phenomenon or to the statistics. For a moment, consider the comparative psychology of infants, apes, cats. Apes and cats are comparatively smarter than infants at the earliest stages. If you can agree even to a degree, then how could that be?
  • Wayfarer
    22.5k
    Kant have either one without the other.
    magritte

    Take it up with Max.

    Look, the fundamental issue, the basic problem, whatever, is that all modern science - big statement! - relies on objectification. Newton, Galileo, Descartes, et al, perfected the method for mathematisation of statements about objective phenomena. It is the universal science, in that it can cope with any kind of object. But mind is not an object. If that is not obvious, then let’s get into a multi-hundred page Internet forum thread about why it’s not. Although I might sit it out.
  • magritte
    553
    the fundamental issue, the basic problem, whatever, is that all modern science - big statement! - relies on objectification. ... But mind is not an object.
    Wayfarer
    :100:
    Which is why social sciences are so difficult. There are very few convenient object names, like apple, to anchor isolated changing processes.
  • khaled
    3.5k
    What evidence do you have that that's what you did? You learnt to use 'red' at, what, two, three? Are you suggesting you have a clear memory of the method you used?
    Isaac

    Be reasonable. What use is it asking the question if the reply was going to be: "Actually, you don't remember"? I just generalized to how I learn new words that have an associated experience. For instance, in my language there was no word for "melancholy". What I said here is how I came to learn the word and the experience associated with it, at a much later age.

    You didn't say 'the world seems like something'. You said "...seems like X". I'm saying, for example, that the evidence from cognitive science suggests that it cannot have seemed like X. It must have seemed like Y, or Z. You're simply reporting, post hoc, that it seemed like X because of your cultural models which encourage you to talk about experiences in this way.
    Isaac

    Basic algebra tells you that X can take on any value including Y or Z. Point is that it seemed like something. I later call it "red" or "pain" or whatever.

    I'm trying to argue that they are not as you, seconds later, think they were.
    Isaac

    Agreed.

    They don't work the same way; the inaccuracies are built into the mechanism; it happens instantly, as a result of hippocampus function, not long term as a result of action potential changes.
    Isaac

    As far as I can tell, the working memory and sensory memory are the source of experiences. As in, if they stopped functioning, you wouldn't have any experiences at all. What you're saying here is that I had the experience Y first, which was then altered to a different experience X due to built-in inaccuracies. That doesn't make sense: what is this experience Y? All I ever see is the experience X. There is no "more accurate" experience Y that preceded it.

    If I am measuring something and it turns out to be 5cm, you cannot make the claim "Actually, you made a more accurate measurement which was then changed to 5cm ± 0.1cm due to the built-in inaccuracy of the ruler".

    Conscious experience is invoked in AI, physicalism, the limits of knowledge...
    Isaac

    Can't AI also have a certain experience then reach for the word "red" to describe it?

    At no point do I have a 'feeling of a colour' which I then select the name for from some internal pantone chart.
    Isaac

    But you said that you experience something, then reach for the word "red" to describe it. I am asking how we can compare these "somethings".
  • Marchesk
    4.6k
    Look, the fundamental issue, the basic problem, whatever, is that all modern science - big statement! - relies on objectification. Newton, Galileo, Descartes, et al, perfected the method for mathematisation of statements about objective phenomena. It is the universal science, in that it can cope with any kind of object. But mind is not an object.
    Wayfarer

    Makes me wonder how Tegmark thinks the mind fits into math. He's fond of arguing that everything that exists is mathematical, and all mathematical objects exist.
  • khaled
    3.5k
    Neurology is a discipline that tells us much about how conscious experience happens.
    creativesoul

    We're asking why.

    Newtonian mechanics doesn't care about why its components exist; it is a study of how they interact. Same with phenomenology

    Name these components of which all conscious experience consists.
    creativesoul

    I haven't read phenomenology books. Couldn't tell ya.

    It tells us nothing except that each and every experience is unique, and that no report regardless of first or third person perspective can be complete.
    creativesoul

    So you have ineffable private experiences. As I said a while ago, the way you use "experiences" is nearly identical to the way people use "qualia".
  • magritte
    553
    Basic algebra tells you that X can take on any value including Y or Z. Point is that it seemed like something. I later call it "red" or "pain" or whatever.
    khaled

    You aren't entitled to just call something pain any more than calling something duh.
  • Wayfarer
    22.5k
    Makes me wonder how Tegmark thinks the mind fits into math. He's fond of arguing that everything that exists is mathematical, and all mathematical objects exist.
    Marchesk

    Yes, I don’t understand how he can still be a materialist, but he apparently is.
  • Olivier5
    6.2k
    you can't explain consciousness, because consciousness is the source of any and all explanation.
    Wayfarer

    This is the core of the issue, and probably why we think it’s hard, but I am not yet convinced that the human mind is unable to understand itself.

    Yes, I don’t understand how he can still be a materialist, but he apparently is.
    Wayfarer

    There are logically coherent forms of materialism that consider the mind as physically mediated, created by the brain, but not an illusion. Instead, the mind is seen as an effective organ, useful to the survival of the individual. Aka compatibilism. So one can be a non-naïve, coherent materialist if one includes the human mind in ‘matter’, as something that literally ‘matters’.
  • Wayfarer
    22.5k
    This is the core of the issue, and probably why we think it’s hard, but I am not yet convinced that the human mind is unable to understand itself.
    Olivier5

    But it's a problem of reflexivity. 'The eye can see another, but not itself. The hand can grasp another, but not itself.' That actually is from the Upaniṣads, and it's an observation which I don't think has a parallel in Western philosophy, but it's an extremely important principle.

    I don't know if you're aware of a French scholar by the name of Michel Bitbol. He has some very interesting and relevant insights into this issue - see his paper It is never known but it is the knower.

    There are logically coherent forms of materialism that consider the mind as physically mediated, created by the brain,
    Olivier5

    I think 'created by' is an issue. It's a question of ontological dependency. We instinctively see the mind as 'created by' or 'a product of' the material, but I'm not so sure. If I was a good enough story-teller, I could tell you something that affected your physiology - your 'blood would run cold' or maybe you would become angry and your adrenaline would kick in. That is 'mind over matter' on a very small scale, but the principle applies in all kinds of ways.
  • khaled
    3.5k
    If I was a good enough story-teller, I could tell you something that affected your physiology - your 'blood would run cold' or maybe you would become angry and your adrenaline would kick in. That is 'mind over matter' on a very small scale, but the principle applies in all kinds of ways.
    Wayfarer

    But you couldn't tell that story without vibrations in the air hitting his eardrums, which go on to have the effect you describe. I think it's always "mind along with matter", never "mind over matter". And especially not "just matter", if the word "matter" is to mean anything.
  • creativesoul
    11.9k
    What problem?

    Neurology is a discipline that tells us much about how conscious experience happens.
    — creativesoul

    Yeah, but as Luke in this thread (and Chalmers elsewhere) has pointed out, it doesn't explain why any physical system would be conscious.
    Marchesk

    Chalmers' "what it's like" rendering is an untenable and rather ill-informed approach. I've argued that at length on this very forum. There is no such singular thing as "what it's like" to be human.

    Our conscious experience (being human) is an ongoing process that is directly and indirectly influenced by, and consisting of, all sorts of different things all the time. It is an autonomous process, one over which we have little to no control, to a very large degree. That said, each and every moment of our lives counts as "what it's like to be human", and this alone poses a huge problem, obviously enough I would hope, for anyone who aims at defining "what it's like to be human", for being human is not like any single excised duration within our lives. It's exactly like all of them, but they are each respectively different. Thus, the notion is incoherent at best. It's untenable. Our conscious experience consists of all moments during our lives, and each and every duration is unlike the rest, for each and every one consists of some elements that the others do not. Being human is all of them.

    Moreover, to labor the point by introducing changes in our thought and belief systems: because the way we think about what's happening changes over time (along with changes in our belief system), and the way we think about things affects/effects conscious experience, even our experiences involving the same sorts of things change over time as well, despite the recurrence of some of the elements.

    Drinking Maxwell House at time t1 is a much different experience than drinking Maxwell House at time t20,000 if along the way one gradually begins to enjoy the experience less and less unbeknownst to themselves at first. This will certainly happen as a result of the taster drinking 100% Kona coffee freshly ground and prepared with a French press at some time during their lives, and then continuing to drink Kona coffee more and more afterwards. We can replace Kona coffee and the preparation process with any other, and the point holds.

    All of this places the notion of "what it's like to be a human" under rightful suspicion regarding its ability to even provide an outline for our conscious experience, for what coffee tasting is like at time t1 is not what coffee tasting is like at time t20,000, even without the introduction of Kona coffee. The very same issues arise with any and all conscious experiences of 'X' at different times. Variables fundamentally change the experience.


    Our understanding of physics would not predict this if we weren't already conscious.
    Marchesk

    This seems irrelevant to me, although I'd be happy to entertain an argument for how it is.

    Some folk hereabouts seem to think that we cannot acquire knowledge of our own conscious experience, simply because we must use it as a means for doing so. They've adopted this fait accompli attitude about the subject. There's a similar vein of thought pervading philosophy of language and 'getting beneath language'. I've found that that's not an insurmountable problem at all, actually, in either respect. The method of approach matters most in such metacognitive endeavors, and that method must include adequate minimal (universal) standards and criteria which must be determined first and satisfied accordingly throughout the endeavor.

    Unfortunately, attention spans are required, and seem to be lacking...

    It's really no different (roughly speaking) than acquiring knowledge about anything that exists (existed) in its entirety prior to our awareness and/or subsequent accounting practices of it. Conscious experience is one such thing.



    A nervous system wouldn't fundamentally be different than a computer with input devices, in that regard.
    Marchesk

    This broaches another topic, but perhaps it's worth touching upon...

    On my view, nervous systems aren't fundamentally conscious. They are most certainly fundamentally different than computers. I would not even go as far as to say that a human being is fundamentally conscious, at least not from the moment of conception through the first completely autonomous correlation drawn between different things.

    This skirts around the issue of where to 'draw the line', so to speak, which again harks back to the aforementioned criteria.



    Why do we see colors and feel pain when no other physical system does this, as far as we can tell? What would it take for a robot to do so? Did Noonien Soong slip a qualia chip into Data's positronic brain?
    Marchesk

    Animals do. They are physical systems, in part at least, just like we are.

    What would it take for a robot to see colors and feel pain? Probably biological machinery capable of doing so. At least, that's my guess.
  • Mww
    4.9k


    You invoke consciousness, I invoke reason. The same intrinsic circularity is patently inevitable.

    Nature of the beast.
  • creativesoul
    11.9k
    It tells us nothing except that each and every experience is unique, and that no report regardless of first or third person perspective can be complete.
    — creativesoul

    So you have ineffable private experiences.
    khaled

    Sigh...



    ...the way you use "experiences" is nearly identical to the way people use "qualia".
    khaled

    If you believe that, then you clearly do not understand much of what I've said, and until I have reason to believe that you do, there's no reason for me to continue our discussion, for it seems to have been a waste of time. Hopefully some other reader gets something out of it. Best, of course, if you do...

    Respectfully, be well.

    :smile:
  • creativesoul
    11.9k
    The same intrinsic circularity is patently inevitable.

    Nature of the beast.
    Mww

    Yep. That's where a major disagreement between you and me seems to be. We've discussed that at length in the past, but as it pertains to conscious experience, could you explain how we cannot use reason to acquire knowledge of our own conscious experience?

    Why, or how is it fait accompli?
  • Mww
    4.9k
    We've discussed that at length
    creativesoul

    Yeah....I was wondering which of us would break the dialectical ice.

    I will begin by saying for the record, you are soooo close in your reasoning, to my own. As before, the only thing missing, and the potential source of complete affirmation or possibly negation.....gotta allow that, after all....., is method. As far as I’m concerned, existential dependency and elemental constituency are given, but I want to know what they are and HOW they are given. I can tell you, from a very particular speculative methodology, but you haven’t told me. I grant you may find mine untenable, if not inadequate, but at least you have something to judge.

    I submit for your esteemed consideration, we cannot use reason to acquire knowledge of consciousness, because reason invented it. The very best we can do, is use the notion of consciousness in such a way that it does not contradict its own invention. And the best way to use it, is, not as a thing to know about, but as a necessary condition for something we do know about.

    Perhaps you recognize that last sentence.

    Robotic voiceover: “...Shall..we..play..a......game?”
  • Olivier5
    6.2k
    But it's a problem of reflexivity. 'The eye can see another, but not itself. The hand can grasp another, but not itself.' That actually is from the Upaniṣads, and it's an observation which I don't think has a parallel in Western philosophy, but it's an extremely important principle.
    Wayfarer

    But a hand can hold another hand, and an eye can see another eye...

    I don't know if you're aware of a French scholar by the name of Michel Bitbol. He has some very interesting and relevant insights into this issue - see his paper It is never known but it is the knower.

    Never heard of him, will check out.

    There are logically coherent forms of materialism that consider the mind as physically mediated, created by the brain,
    — Olivier5

    I think 'created by' is an issue. It's a question of ontological dependency. We instinctively see the mind as 'created by' or 'a product of' the material, but I'm not so sure. If I was a good enough story-teller, I could tell you something that affected your physiology - your 'blood would run cold' or maybe you would become angry and your adrenaline would kick in. That is 'mind over matter' on a very small scale, but the principle applies in all kinds of ways.

    Underwritten by the brain, if you prefer. Information is always ‘written’ on something, it has to be the form of something material, in order to exist materially. A poem is not paper and ink, but it has to be written in paper and ink (or another material support) in order to exist.

    If matter can affect minds (and it can), then minds can affect matter, by the principle of action-reaction.

    In the type of biology-centred ‘emergent materialism’ I practice, ‘mind over matter’ is the only possible raison d’être of minds. By that I mean that if nature created something as bizarre as minds, it must be for a reason. Minds must be able to do something special, have some sort of value-added that living creatures without it are necessarily lacking. My hypothesis is that the mind is simply the pilot in the creature. It follows that the greater the freedom of movement of the creature, the greater the need for a mind. A plant moves less than an animal, and has far less need for a mind than an animal. A vegetative animal (e.g. a coral remaining in the same place, or any bivalve mollusk attached to its rock) has less need for a mind than an octopus. De facto, cephalopods (octopuses, cuttlefish etc) have far larger brains than any other mollusk species, because they can move a lot of arms (8). Now, if brains underwrite minds, cephalopods have bigger minds than all other invertebrates.

    Not as large as birds, who can fly. I envy them a bit for that... :-)