• Marchesk
    4.6k
    We can use our models and shared language to report the state of our models and shared language. Saying "Ah, but your conclusion is just a model too" isn't sufficient on its own to undermine anything.Isaac

    Our models and shared language include private and subjective. The mental talk is part of ordinary language. What's ironic here is that ordinary language philosophy in the form of a certain interpretation of Wittgenstein is being used to discount the ordinary talk of mental states.
  • Isaac
    10.3k
    Can computers describe their own calculations in detail, bit by bit? Or do they only report the results of these calculations, at points specified in the program? It makes a difference.Olivier5

    I'm not a computer scientist, so if there's some technical issue I'm unaware of then maybe this would be difficult, but I can't see the intrinsic barrier. Ctrl+Esc gives me a rundown of the CPU's occupation, and this despite the fact that the CPU must be in use running the very program which works out how 'in use' the CPU is.
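    The point can be made concrete with a short sketch (Linux-only, assuming the `/proc/stat` interface is available): a program that uses the CPU to compute how busy the CPU is, which is essentially what Task Manager does.

```python
# A process using the CPU to measure CPU occupation -- the self-reporting
# Isaac describes. Linux-only sketch: it reads the aggregate CPU time
# counters from /proc/stat twice and computes the busy fraction in between.
import time

def read_cpu_times():
    # First line of /proc/stat: "cpu  user nice system idle iowait irq ..."
    with open("/proc/stat") as f:
        fields = f.readline().split()[1:]
    times = [int(x) for x in fields]
    idle = times[3] + times[4]  # idle + iowait ticks
    return sum(times), idle

def cpu_usage_percent(interval=0.5):
    total1, idle1 = read_cpu_times()
    time.sleep(interval)
    total2, idle2 = read_cpu_times()
    busy = (total2 - total1) - (idle2 - idle1)
    return 100.0 * busy / max(total2 - total1, 1)

print(f"CPU busy: {cpu_usage_percent():.1f}%")
```

    Note that, as Marchesk goes on to point out, this reports tick counters the kernel maintains, not the physical events in the hardware itself.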
  • Isaac
    10.3k
    No you don't. You think and wonder using neurons. You talk using language. — Isaac


    Inner dialog doesn't exist? I hear my thoughts in words.
    Marchesk

    Inner dialogue is talking, no?
  • Marchesk
    4.6k
    Inner dialogue is talking, no?Isaac

    Private conversation.
  • Marchesk
    4.6k
    I'm not a computer scientist, so if there's some technical issue I'm unaware of then maybe this would be difficult, but I can't see the intrinsic barrier. Ctrl+Esc gives me a rundown of the CPU's occupation, and this despite the fact that the CPU must be in use running the very program which works out how 'in use' the CPU is.Isaac

    Sure, but it's not telling you what the CPU hardware is actually doing. And binary is an abstraction of electricity being moved around through logic gates with high and low voltages.
  • frank
    16k
    I don't see what that's got to do with the metaphor. All I'm saying is that computers can use their internal calculation mechanisms to report the state of that same mechanism.

    We can use our models and shared language to report the state of our models and shared language. Saying "Ah, but your conclusion is just a model too" isn't sufficient on its own to undermine anything.
    Isaac

    So we never exit the realm of models, right? We just deal in models of models, and models of modeled models.

    This language implies the thing that's being modeled (the thing in itself), but that's forever beyond our reach. Is that a fair assessment of your view?

    BTW, you frequently seem to be putting a homunculus in "the brain" which interprets signals. Maybe that's just a result of the nontechnical language you're using.
  • Olivier5
    6.2k
    Ctrl+esc gives me a rundown of the cpu's occupation,Isaac

    It doesn't give you a rundown of the detail of its calculations, though. To do that, the CPU would need to know what it is calculating while it is calculating. In other words, it would need to be self-aware.

    So I am afraid that this part of your metaphor doesn't work:

    I can ask a computer to print out the actual binary of its last calculation.Isaac
  • Isaac
    10.3k
    Inner dialogue is talking, no? — Isaac


    Private conversation.
    Marchesk

    Maybe, but that's not the same thing as thinking. Otherwise "I'm thinking of a word" wouldn't make any sense.
  • Isaac
    10.3k
    Sure, but it's not telling you what the CPU hardware is actually doing. And binary is an abstraction of electricity being moved around through logic gates with high and low voltages.Marchesk

    It doesn't give you a rundown of the detail of its calculations, though. To do that, the CPU would need to know what it is calculating while it is calculating. In other words, it would need to be self-aware.Olivier5

    So? Is this not doing exactly that? https://developer.android.com/studio/profile/cpu-profiler
  • Isaac
    10.3k
    So we never exit the realm of models, right? We just deal in models of models, and models of modeled models.

    This language implies the thing that's being modeled (the thing in itself), but that's forever beyond our reach. Is that a fair assessment of your view?
    frank

    To a degree. The only thing I'd say is that I don't consider the 'thing in itself' to be beyond our reach. I think a model is us reaching it. There's no more 'it-ness' than the impact on our models. It's not like there's a 'really real' tree out there and all we have is approximations to it. Our approximations are the tree, the 'really real' one; the hidden states that cause us to model a 'tree' are revealed to us by our sense organs as they 'really' are. We might not be able to sense all there is to be sensed, but that doesn't make what we do sense less real.
  • Isaac
    10.3k
    BTW, you frequently seem to be putting a homunculus in "the brain" which interprets signals. Maybe that's just a result of the nontechnical language you're using.frank

    Yes, flitting between 'you' (meaning the entity producing self-reports) and 'your brain' (meaning that which neuroscientists can see) is an activity prone to errors of translation, of which I may well have made several in my numerous posts. If you spot any glaring ones...
  • frank
    16k
    To a degree. The only thing I'd say is that I don't consider the 'thing in itself' to be beyond our reach. I think a model is us reaching it. There's no more 'it-ness' than the impact on our models. It's not like there's a 'really real' tree out there and all we have is approximations to it. Our approximations are the tree, the 'really real' one; the hidden states that cause us to model a 'tree' are revealed to us by our sense organs as they 'really' are. We might not be able to sense all there is to be sensed, but that doesn't make what we do sense less real.Isaac

    I'm not understanding why you wouldn't extend the same attitude toward the psyche.

    I know why Banno doesn't: his materialist ontology governs his entire outlook. Is that your reason as well?

    Yes, flitting between 'you' (meaning the entity producing self-reports) and 'your brain' (meaning that which neuroscientists can see)...Isaac

    Entity just means thing. The thing that produces self reports is what? The whole organism?
  • Isaac
    10.3k
    I'm not understanding why you wouldn't extend the same attitude toward the psyche.frank

    I do. The signals are invariant, but the structures we generate with them (the models) are themselves socially constructed yet, being based on the same hidden states, no less 'real' for that.

    I'm not arguing that mental states aren't real, only that they're socially constructed (to the extent that I'm arguing, anyway).

    Entity just means thing. The thing that produces self reports is what? The whole organism?frank

    Yeah. The thing containing the mouth they come out of. Nothing philosophically deep, I'm afraid.
  • frank
    16k
    I'm not arguing that mental states aren't real, only that they're socially constructed (to the extent that I'm arguing, anyway).Isaac

    I see. I guess I misunderstood you. So it's not a problem for you that the moon, for instance, is also socially constructed. :up:
  • Olivier5
    6.2k
    Apparently it records CPU usage but not each binary step.
  • Luke
    2.6k
    I typically think and wonder using language.
    — Luke

    No you don't. You think and wonder using neurons. You talk using language.
    Isaac

    Well, it seems to me like I think and wonder in language, if that's any different. I'm never aware of myself thinking and wondering using neurons.

    what I'm consciously aware of does not have the nature of, or is not in the form of, a brain signal
    — Luke

    It obviously does.
    Isaac

    How is it obvious? I know what I'm consciously aware of and it isn't brain signals. And neither is it in the form of brain signals.

    But you seem to be missing the point I raised a few posts back (Shakespeare/Milton example). Common use of 'about', or 'of' when it comes to awareness assumes one can be wrong in identifying the object.Isaac

    I think there are two different meanings of "awareness" at work here, and both are "common use". You want to restrict "awareness" to mean (only) "knowledge", such as with your Shakespeare example. Although I do find the question "Are you aware of the works of Shakespeare?" somewhat odd. It seems more natural to ask "Do you know the works of Shakespeare?" Nevertheless...

    On the other hand, I'm using "awareness" to mean "present to mind" or simply "conscious (of)". I don't believe this is an uncommon usage. Merriam-Webster defines "conscious" as "perceiving, apprehending, or noticing with a degree of controlled thought or observation". The Wikipedia article on Awareness opens with: "Awareness is the state of being conscious of something."

    Yet here you want to say that whatever you think is the object of your awareness just is, purely by virtue of the fact that you think it is. That seems contrary to the way we use the expression in all other areas.Isaac

    In terms of conscious awareness, the fact of the matter is whatever is present to one's mind or whatever one is conscious of, including one's own sensations/feelings/perceptions. It needn't be public knowledge nor amenable to public correction.

    But I said "...because they're connected to the part of your brain for which activity therein is what we call 'conscious awareness'". That's how.Isaac

    All you have done here is to identify brain activity with conscious awareness; it doesn't explain how you are conscious of your brain activity. As I said earlier: "The awareness of my arm movements might be the result of my brain function, but that doesn't mean I have awareness of my brain function".

    The process by which you become aware is as described, but it is absolutely evident that it is not 'your arm' that you become aware of.
    — Isaac

    Then what is it that you are aware of?
    — Luke

    We could say neural signals, or we could perhaps also talk about models, or features of perception to get away from neuroscience terms.
    Isaac

    Since you didn't actually apply any of this to your arm example, I don't see how it helps. Clearly, the person is aware of - that is, conscious of - their arm being in a particular location, even though their arm isn't in that location. Otherwise, what did you mean by the italicised part of "you think your arm is doing one thing, but it's actually doing another"?
  • Isaac
    10.3k
    Apparently it records CPU usage but not each binary step.Olivier5

    The point is it is using the CPU to report data about the CPU. That's all. It's presented only in opposition to the claim that we cannot use a model to report on our modelling process. We obviously can.
  • Isaac
    10.3k
    Well, it seems to me like I think and wonder in language, if that's any different. I'm never aware of myself thinking and wondering using neurons.Luke

    No, I don't suppose you would be. I don't suppose you're aware of your kidneys' functioning either, but that doesn't mean they don't function. Again, what you have to take on board (if you agree with the science, of course; it is speculative, after all) is that what you think is happening is a post hoc narrative put together after the actual event.

    Let's take a really simple example: turning a light switch on. When you flip the switch, you no doubt think that you flipped the switch and then saw the light come on. You didn't. You saw the light come on before you felt you'd flipped the switch. The signals from your finger took longer to get to your brain than the light took. Your occipital cortex processed the data from the light bulb before it even received the data from your finger. It sent signals to the parts of your brain dealing with object recognition and spatiotemporal response, all before the signal from your finger had even arrived. These even reached the same areas of your brain usually involved in conscious processing... then the data from your finger arrived. This new data was sent to the part of your brain dealing with sequencing (episodic memory), which then sent backward-acting neural signals that suppressed the sequencing data in the original message coming from the occipital cortex. It changed the story to make it seem like you saw the light after you flipped the switch, because that's what you were expecting to see (based on what you know about switches and lights, cause and effect).

    So what actually happens, in areas we know deal with specific sub-parts of processing the stream of conscious awareness, is not what you later recall happened. If we artificially interrupt this sequence, we can get you to think the initial sequence is what happened. It did enter your conscious stream of thought; you just revised it a few milliseconds later. Your introspection does not provide you with an accurate picture even of your stream of conscious awareness. It provides you with a heavily filtered, selective and occasionally flat-out made-up narrative of what just happened. Most of that re-telling of the story is shaped by models picked up in childhood, i.e. culturally mediated, public data.

    It seems to you that you think in language because you've been enculturated into modelling your thoughts that way. Whatever goes on in your brain, you're going to re-tell the narrative post hoc to fit the model you're expecting it to fit, in this case "all my thoughts were words". It may or may not work for you, but one introspective exercise that is sometimes revealing is to work out some relatively complex strategy, say how you're going to negotiate an obstacle course, or solve a bit of first-order logic. Then try to recall (if you really think in words) the exact sentences you used to think it through, word for word. Most people can't.

    You think too fast to form full sentences, but we're so embedded in language that the language centres of our brains convert the stuff we think into words as we go, assuming we might need to communicate it at any moment. Since the thoughts are too fast, they only have time to select a few key words; hence the incomplete sentences. Your brain (if it has been enculturated to do so) interprets this association as 'thinking in words' and so suppresses the data with the alternate sequencing, because it's not expecting it. You end up with the narrative that you thought in words.

    I think there are two different meanings of "awareness" at work here, and both are "common use".Luke

    Yes. I've not made clear what I was trying to do there. I'm not disputing that your use is common. What I was trying to highlight is the (I believe unjustified) special pleading by which 'awareness' is used differently with regard to the mind than in all other cases. I don't dispute its common use; I dispute that it reveals anything useful about the way the mind works. It's a comforter, not an insight.

    All you have done here is to identify brain activity with conscious awareness; it doesn't explain how you are conscious of your brain activity.Luke

    This is another way in which mind-talk uses special pleading. If you ask me how the petrol gets to the engine in a car, a description of the fuel pump and line is usually considered to have answered the question 'how?'. Maybe a description of pressure in a closed space might be added if necessary. Whenever minds are introduced, suddenly no description of process has answered the question 'how?'. It's as though one can forever ask "but how does all that lead to conscious awareness?" and absolutely anything said will be unsatisfactory. What would an answer look like, to you? Give me an example answer to the question "how are you conscious of your brain activity?" that you would accept as a satisfactory series of steps.
  • Olivier5
    6.2k
    The point is it is using the CPU to report data about the CPU. That's all. It's presented only in opposition to the claim that we cannot use a model to report on our modelling process. We obviously can.Isaac

    We can stipulate in the code (or add in some parallel code) one or several reporting routines that regularly output a certain data set, following certain milestones. So the CPU can be monitored through regular reporting of some data set, e.g. 10 times per second, but one cannot access the actual electric currents inside the CPU that produce these data sets as they happen. Of course one could hypothetically reconstruct them by piecing together the reported data, but not empirically record the physical events in the CPU. At least not on your average PC; whether the technology exists in the lab, I am not aware of it.
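    The "reporting routine" idea can be sketched roughly as follows; all names here are illustrative, not from any real monitoring API. A background thread snapshots the state of a running calculation at fixed intervals, so an observer only ever sees periodic reports, never the individual steps themselves.

```python
# Sketch of a parallel self-reporting routine: the main loop does the
# "actual calculation", while a timer thread emits periodic snapshots.
# The reports list ends up far shorter than the number of steps taken.
import threading
import time

class SelfReporter:
    def __init__(self, hz=10):
        self.interval = 1.0 / hz
        self.work_done = 0
        self.reports = []            # periodic snapshots, not raw events
        self._stop = threading.Event()

    def _report_loop(self):
        while not self._stop.is_set():
            self.reports.append(self.work_done)   # milestone report
            time.sleep(self.interval)

    def run(self, steps):
        t = threading.Thread(target=self._report_loop)
        t.start()
        for _ in range(steps):       # the underlying calculation
            self.work_done += 1
        self._stop.set()
        t.join()

r = SelfReporter(hz=100)
r.run(200_000)
print(f"{len(r.reports)} reports for {r.work_done} steps of work")
```

    The snapshots let you roughly reconstruct the calculation's trajectory after the fact, but the individual increments between reports are simply not recorded anywhere, which is the analogy being drawn.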

    To my mind this is important because I actually find the computer metaphor for the human mind useful. Not that people are just computers, but they are also computers. We human beings can compute; in fact we invented computing, in a way, and then taught it to machines.

    So what does this mean for the mind-body problem? Maybe that our consciousness cannot access the physical, neuronal processes underlying it; it can only access periodic reports from such neuronal processes. Eg visual, audio or pain reports.
  • Isaac
    10.3k
    one cannot access the actual electric currents inside the CPU that produce these data sets, as they happen.Olivier5

    So? I don't see the relevance. No-one here is suggesting that the neuroscientific data gathered is interpreted in real time as it's being generated.

    our consciousness cannot access the physical, neuronal processes underlying it; it can only access periodic reports from such neuronal processes. Eg visual, audio or pain reports.Olivier5

    Exactly what I've been arguing with Luke.
  • Olivier5
    6.2k
    our consciousness cannot access the physical, neuronal processes underlying it; it can only access periodic reports from such neuronal processes. Eg visual, audio or pain reports.
    — Olivier5

    Exactly what I've been arguing with Luke.
    Isaac

    Well, glad that's clarified.

    Come to think of it, the original metaphor was made here:

    It always baffles me that this is seen as some coup de grâce. "But the study of social constructs is itself just a social construct", "You're using rationality to work out the origin of rationality", "All metaphysics is nonsense is itself metaphysics"...

    It's just not the logical flaw people seem to assume it is. I can ask a computer to print out the actual binary of its last calculation. There's no problem at all ... Psychology's models of how the brain works (including that we model the world) are themselves just a model of the world (in this case the brain bit of it). So what? What's the killer blow we must now succumb to because of that insight?
    Isaac

    The killer blow is this: IF the study of social constructs concludes that social constructs are possible, reasonable, useful and improvable (the Collingwood project, if I understand well), then there is no problem; but IF one concludes from the study of social constructs that they are on the whole unreasonable fancies, then one has a problem of self-contradiction. Because the study of social constructs is itself a social construct, and if social constructs are fanciful, the idea that they are fanciful is itself fanciful.

    Likewise, "all metaphysics is nonsense" is reflexive, and thus it is a self-contradictory statement. You will have recognized the paradox of the liar. We already spoke about it.

    And therefore... if models of how the brain works are themselves just mental models (or social constructs), then a model of how the brain works that concludes that mental models and/or social constructs are illusory, fanciful or epiphenomenal is contradicting itself. Only neurological models that recognise (or better, explain) the utility of conscious human thoughts and social constructs can be asserted without running into internal contradictions.
  • Harry Hindu
    5.1k
    our consciousness cannot access the physical, neuronal processes underlying it; it can only access periodic reports from such neuronal processes. Eg visual, audio or pain reports.Olivier5

    This is like saying humans can't fly. Sure, they can't without mechanical help, but they can with it. Our consciousness can access the underlying physical processes with a little mechanical/electronic help, by observing (a conscious activity) MRI images of our brain.

    One could also say that humans can't communicate without help of ink, paper, computers and air.

    It basically comes down to being a realist or a solipsist.

    Either there is a causal relationship between your mind and the world or there isn't. If there is, then the relationship between cause and effect is information, and effects (the state of your mind) carry information about their causes (the state of the world just prior to some mental state, like the state of some internet philosophy forum post as you begin to read it).

    Either your internet forum post contains information about one of your prior mental states or it doesn't. If it doesn't then we're not communicating and you are just a figment of my imagination.
  • Olivier5
    6.2k
    Either there is a causal relationship of your mind with the world or there isn't. If there is then the relationship between cause and effect is information and effects (the state of your mind) carry information about their causes (the state of the world just prior to some mental state like the state of some internet philosophy forum post as you begin to read it).Harry Hindu

    I agree, but it goes both ways: the state of my mind also determines what I will physically do, like when one decides to do or write something.
  • Mww
    4.9k
    And therefore...Olivier5

    Circular. Not contradictory. With respect to illusion.

    1.) The mental model of the brain.....

    that determines brain workings.....
    which determines mental models to be illusory....

    ....must therefore be illusory.

    1A.) An illusory mental model of the brain....

    that determines brain workings....
    which determines mental models to be illusory....

    ....remains consistently illusory.

    This is the “killer blow” so far missed. It is human reason itself, the ground of everything human, that is intrinsically circular, and therefore susceptible to illusory conclusions. It is the nature of the rational beast, inevitable and irreconcilable, possible only to guard against, never to eliminate.

    Science as a doctrine sets the ground for trying, but it is always a human that does science, so.....just more potential circularity.

    Mental models of brain workings that determine that mental models are impossible......are contradictory.
    ————-

    With respect to epiphenomenalism, science may eventually falsify the premise, empirically, but it is currently viable as an explanatory thesis, metaphysically, merely because we don’t possess knowledge sufficient to negate it, and while it violates the principle of cause and effect physically, it stands as non-contradictory from a purely logical domain.

    Rhetorically speaking.....
  • Isaac
    10.3k
    IF the study of social constructs concludes that social constructs are possible, reasonable, useful and improvable (the Collingwood project if I understand well), then there is no problem, but IF one concludes from the study of social constructs that they are on the whole unreasonable fancies, then one has a problem of self-contradiction.Olivier5

    Why must those be the only two options? By far the majority of the work is in deciding which models are useful and coherent, and which aren't.

    Likewise, "all metaphysics is nonsense" is reflexive, and thus it is a self-contradictory statement.Olivier5

    Indeed, but "all metaphysics except this is nonsense" is not, which is more the equivalent we have here.

    We have some really good neurological models. To discard them would require we discard some fairly fundamental models of the physical world and how we interact with it.

    The problem is those models don't match with the models of our own mental processes we developed prior to being able to examine the workings of their physical substrate. So the job is to make a better set.

    It's not about saying things are 'illusions' because they're models. It's about saying things are illusions because they're bad models.
  • Isaac
    10.3k
    1.) The mental model of the brain.....

    that determines brain workings.....
    which determines mental models to be illusory....

    ....must therefore be illusory.
    Mww

    Not true at all. It's only true if it determines mental models to be illusory because they're mental models. It can quite coherently determine them to be illusory for other reasons.
  • Mww
    4.9k


    Common courtesy mandates a response, so......Thanks.
  • Olivier5
    6.2k
    Why must those be the only two options? By far the majority of work is in deciding which models are useful, coherent and which aren't.Isaac

    That is the first option: the attempt to make sense of social constructs (or mental processes) is potentially useful because social constructs (or mental processes) are sometimes reasonable, useful and improvable. In other words, we are talking of a non-eliminative theory of social constructs (or a non-eliminative neurology).
  • Olivier5
    6.2k
    With respect to epiphenomenalism, science may eventually falsify the premise, empirically, but it is currently viable as an explanatory thesis, metaphysically, merely because we don’t possess knowledge sufficient to negate it, and while it violates the principle of cause and effect physically, it stands as non-contradictory from a purely logical domain.Mww

    If epiphenomenalism is true, then epiphenomenalism is an epiphenomenon.
  • Mww
    4.9k
    If epiphenomenalism is true, then epiphenomenalism is an epiphenomenon.Olivier5

    Yep. Circularity. Never provable, but refutable, by merely invoking different majors or minors.

    Start here, you get epiphenomenalism; start there, you don’t.