• Apustimelogist
    918
    That's only a problem for those that posit that intentionality is fundamental.noAxioms

    :up: :up: :100:
  • boundless
    584
    I would not buy that suggestion. More probably the intentionality emerges from whatever process is used to implement it. I can think of countless emergent properties, not one of which suggests that the properties need to be fundamental.noAxioms

    Ok. But if there is an 'emergence', it must be an intelligible process. The problem for 'emergentism' is that there doesn't seem to be any convincing explanation of how intentionality, consciousness and so on 'emerge' from something that does not have those properties.

    As I said before, the fact that we have yet to find a credible explanation for such an emergence is evidence against emergentism. Of course, such an absence of an explanation isn't compelling evidence for the impossibility of an explanation.

    Anyway, I would also point out that IMO most forms of physicalism have difficulty explaining how composite objects can be 'distinct entities'.

    Thus illustrating my point about language. 'Intentional' is reserved for life forms, so if something not living does the exact same thing, a different word (never provided) must be used, or it must be living, thus proving that the inanimate thing cannot do the thing that it's doing (My example was 'accelerating downward' in my prior post).noAxioms

    Ok, thanks for the clarification. But note my point above.

    boundless: Ok, but if intentionality is fundamental, then the arising of intentionality is unexplained.noAxioms

    I misphrased this. I meant: if intentionality is fundamental then there is no need for an explanation.
    That would make time more fundamental, a contradiction. X just is, and everything else follows from whatever is fundamental. And no, I don't consider time to be fundamental.noAxioms

    Right, but there is also the possibility that ontological dependency doesn't involve a temporal relation. That is, you might say that intentionality isn't fundamental but is dependent on something else that lacks intentionality, and yet there has never been a time when intentionality didn't exist (I do not see a contradiction in thinking that, at least).

    As an illustration, consider the stability of a top floor in a building. It clearly depends on the firmness of the foundations of the building, and yet we don't say that 'at a certain point' the upper floor 'came out' of the lower.

    So, yeah, arising might be a wrong word. Let's go with 'dependence'.

    Again, why? There's plenty that's currently unexplained. Stellar dynamics I think was my example. For a long time, people didn't know stars were even suns. Does that lack of even that explanation make stars (and hundreds of other things) fundamental? What's wrong with just not knowing everything yet?noAxioms

    I hope I have clarified my point above. But let's use this example. Stellar dynamics isn't fundamental because it can be explained in terms of more fundamental processes. Will we discover something similar for intentionality, consciousness and so on? Who knows. Maybe yes. But currently it seems to me that our 'physicalist' models can't do that. In virtue of what properties might intentionality, consciousness and so on 'emerge'?

    That's what it means to be true even if the universe didn't exist.noAxioms

    Good, we agree on this. But if they are 'true' even if the universe or multiverse didn't exist, this means that they have a different ontological status. And, in fact, if the multiverse could have failed to exist, this would mean that it is contingent. Mathematical truths, instead, we seem to agree are not contingent.
    Given that they aren't contingent, they certainly can't depend on something that is contingent. So, they transcend the multiverse (they would be 'super-natural').

    Maybe putting in intelligibility as a requirement for existence isn't such a great idea. Of course that depends on one's definition of 'to exist'. There are definitely some definitions where intelligibility would be needed.noAxioms

    If the physical world wasn't intelligible, then it seems to me that even doing science would be problematic. Indeed, scientific research seems to assume that the physical world is intelligible.

    It might be problematic to assume that the physical world is fully intelligible for us, but intelligibility seems to be required for any type of investigation.

    A made-up story. Not fiction (Sherlock Holmes say), just something that's wrong. Hard to give an example since one could always presume the posited thing is not wrong.noAxioms

    Ok. I would call these things simply 'wrong explanations' or 'inconsistent explanations' rather than 'super-natural', which seems to me better suited for speaking about something that transcends the 'natural' (if there is anything that does that... IMO mathematical truths, for instance, do transcend the natural).

    Again, why is the explanation necessary? What's wrong with just not knowing everything? Demonstrating the thing in question to be impossible is another story. That's a falsification, and that carries weight. So can you demonstrate that no inanimate thing can intend? Without 'proof by dictionary'?noAxioms

    TBH, I think that right now the 'verdict' is still open. There is no evidence 'beyond reasonable doubt' for either position about consciousness that can satisfy almost everyone. We can discuss which position seems 'more reasonable', but we do not have 'convincing evidence'.

    That does not sound like any sort of summary of my view, which has no requirement of being alive in order to do something that a living thing might do, such as fall off a cliff.noAxioms

    OK, I stand corrected. Would you describe your position as 'emergentist' then?
  • wonderer1
    2.3k
    Ok. But if there is an 'emergence', it must be an intelligible process. The problem for 'emergentism' is that there doesn't seem to be any convincing explanation of how intentionality, consciousness and so on 'emerge' from something that does not have those properties.boundless

    The emergence of intentionality (in the sense of 'aboutness') seems well enough explained by the behavior of a trained neural network. See:

    [embedded video]

    This certainly isn't sufficient for an explanation of consciousness. However, in light of how serious a concern AI has become, I'd think it merits serious consideration as an explanation of how intentionality can emerge from a system in which the individual components of the system lack intentionality.

    Do you think it is reasonable to grant that we know of an intelligible process in which intentionality emerges?
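    As a toy illustration of that claim (my own sketch, not taken from the video): a tiny neural network can learn XOR, a mapping that no single one of its weights or units computes on its own. Each component is just a number nudged by gradient descent; whatever 'aboutness' there is belongs only to the trained whole.

```python
import numpy as np

# Toy sketch (my own, not from the video): a two-layer network learns XOR.
# No individual weight or neuron computes XOR; only the trained whole does.
rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1, b1 = rng.normal(0, 1, (2, 8)), np.zeros(8)
W2, b2 = rng.normal(0, 1, (8, 1)), np.zeros(1)
sigmoid = lambda z: 1 / (1 + np.exp(-z))

for _ in range(20000):                    # plain gradient descent on squared error
    h = sigmoid(X @ W1 + b1)              # hidden-layer activations
    out = sigmoid(h @ W2 + b2)            # network output for all four inputs
    d_out = (out - y) * out * (1 - out)   # backprop: output-layer error signal
    d_h = (d_out @ W2.T) * h * (1 - h)    # backprop: hidden-layer error signal
    W2 -= 0.5 * h.T @ d_out;  b2 -= 0.5 * d_out.sum(axis=0)
    W1 -= 0.5 * X.T @ d_h;    b1 -= 0.5 * d_h.sum(axis=0)

print(out.round().ravel())  # trained predictions for the four XOR cases
```

    Whether that behavior deserves the word 'intentionality' is of course the question under discussion; the sketch only shows that the input-output mapping is not located in any single component.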
  • boundless
    584
    thanks for the video. It seems interesting. I'll share my thoughts tomorrow about it.
  • Harry Hindu
    5.8k
    All this seems to be the stock map vs territory speech, but nowhere is it identified what you think is the map (that I'm talking about), and the territory (which apparently I'm not).noAxioms
    The map is the first-person view. Is the map (first-person view) not part of the territory?

    Very few consider the world to be a model. The model is the map, and the world is the territory. Your wording very much implies otherwise, and thus is a strawman representation of a typical monist view. As for your model of what change is, that has multiple interpretations, few particularly relevant to the whole ontology of mind debate. Change comes in frequencies? Frequency is expressed as a rate relative to perceptions??noAxioms
    I never said that people consider the world as a model. I said that our view is the model, and the point was that some people (naive realists) tend to confuse the map with the territory in their use of terms like "physical" and "material".

    You do understand that we measure change using time, and that doing so entails comparing the relative frequency of one type of change to another (the movement of the hour hand on the clock vs the movement of the sun across the sky)? Do you not agree that our minds are part of the world and change like anything else in it, and that the time it takes our eye-brain system to receive and process information, compared to the rate at which what you are observing is changing, can play a role in how your mind models what it is seeing?

    So old glass flowing is not an actual process, or I suppose just doesn't appear that way despite looking disturbingly like falling liquid? This is getting nitpickly by me. I acknowledge your example, but none of it is science, nor is it particularly illustrative of the point of the topic.noAxioms
    :meh: Everything is a process. Change is relative. The molecules in the glass are moving faster than when it was a solid, therefore the rate of change has increased, which is why you see it as a flowing process rather than a static object. I don't see how it isn't science when scientists attempt to find consistently repetitive processes with high degrees of precision (like atomic clocks) to measure the rate of change in other processes. QM says that measuring processes changes them and how they are perceived (wave vs particle), so I don't know what you mean by "none of it is science".
  • boundless
    584
    Ok, I watched the video. Nice explanation of how machine learning works.

    Still, I am hesitant to see it as an example of emergence of intentionality for two reasons. Take what I say below with a grain of salt, but here's my two cents.

    First, these machines, like all others, are still programmed by human beings who decide how they should work. So, there is a risk of reading back into the machine the intentionality of the human beings who built it. To take a different example, if you consider a mechanical calculator, it might seem that it 'recognizes' the numbers '2' and '3' and the operation of addition, and then gives us the output '5' when we ask it to perform the calculation. The machine described in the video is far more complex, but the basic idea seems the same.

    Secondly, the outputs the machine gives are the results of statistical calculations. The machine is given a set of examples associating hand-written numbers with the digits they are meant to represent. It then manages to perform better on further trials by minimizing the error function. Ok, interesting, but I'm not sure that we can say that the machine has 'concepts' like the ones we have. When we read a hand-written number '3', it might be that we associate it with the 'concept of 3' by a Bayesian inference (i.e. the most likely answer to the question: "what is the most likely sign that the writer wrote here?"). But when we are aware of the concept of '3' we do not perceive a 'vector' of different probabilities over different concepts.
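    To make the last point concrete (a sketch with invented numbers, not the actual network from the video): the classifier's raw verdict about a hand-written '3' is a probability vector over all ten digit-concepts, from which a single answer is extracted only by taking the maximum.

```python
import numpy as np

# Sketch with invented scores: a digit classifier's output layer yields a
# probability vector over the ten digits, not a single 'concept of 3'.
def softmax(scores):
    e = np.exp(scores - np.max(scores))  # subtract the max for numerical stability
    return e / e.sum()

# Hypothetical raw scores for an image of a hand-written '3'
scores = np.array([0.1, 0.3, 1.2, 4.0, 0.2, 0.9, 0.1, 0.4, 1.1, 0.3])
probs = softmax(scores)

print(probs.round(3))         # most of the probability mass sits on digit 3
print(int(np.argmax(probs)))  # the machine 'answers' by taking the maximum
```

    The machine's 'answer' is thus a summary of a distribution, which is the disanalogy with our awareness of the concept that I am pointing at.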
  • noAxioms
    1.7k
    Ok. But if there is an 'emergence', it must be an intelligible process.boundless
    I deny that requirement. It sort of sounds like an idealistic assertion, but I don't think idealism suggests emergent properties.

    Right, but there is also the possibility that ontological dependency doesn't involve a temporary relation.
    Sure

    That is, you might say that intentionality isn't fundamental but is dependent on something else that lacks intentionality, and yet there has never been a time when intentionality didn't exist
    I was on board until the bit about not being a time (presumably in our universe) when intentionality doesn't exist. It doesn't appear to exist at very early times, and it doesn't look like it will last.

    As an illustration, consider the stability of a top floor in a building. It clearly depends on the firmness of the foundations of the building, and yet we don't say that 'at a certain point' the upper floor 'came out' of the lower.
    But it's not building all the way down, nor all the way up.

    Stellar dynamics isn't fundamental because it can be explained in terms of more fundamental processes.boundless
    But it hasn't been fully explained. A sufficiently complete explanation might be found by humans eventually (probably not), but currently we lack that, and in the past, we lacked it a lot more. Hence science.

    Will we discover something similar for intentionality, consciousness and so on?
    Maybe we already have (the example from @wonderer1 is good), but every time we do, the goalposts get moved, and a more human-specific explanation is demanded. That will never end since I don't think a human is capable of fully understanding how a human works any more than a bug knows how a bug works.

    That would be an interesting objective threshold of intelligence: any entity capable of comprehending itself.

    But currently it seems to me that our 'physicalist' models can't do that.boundless
    I beg to differ. They're just simple models at this point is all. So the goalposts got moved and those models were declared to not be models of actual intentionality and whatnot.
    We could do a full simulation of a human and still there will be those that deny usage of the words. A full simulation would also not constitute an explanation, only a demonstration. That would likely be the grounds for the denial.

    But if they are 'true' even if the universe or multiverse didn't exist, this means that they have a different ontological status. And, in fact, if the multiverse could have failed to exist, this would mean that it is contingent.
    Agree with all that.

    Mathematical truths, instead, we seem to agree are not contingent.boundless
    Mathematics seems to come in layers, with higher layers dependent on more fundamental ones. Is there a fundamental layer? Perhaps laws of form. I don't know. What would ground that?

    Given that they aren't contingent, they certainly can't depend on something that is contingent. So, they transcend the multiverse (they would be 'super-natural').
    Good point


    If the physical world wasn't intelligible, then it seems to me that even doing science would be problematic.
    Just so. So physical worlds would not depend on science being done on them. Most of them fall under that category. Why doesn't ours? That answer at least isn't too hard.

    There is no evidence 'beyond reasonable doubt' for either position about consciousness that can satisfy almost everyone.
    Agree again. It's why I don't come in here asserting that my position is the correct one. I just balk at anybody else doing that, about positions with which I disagree, but also about positions with which I agree. I have for instance debunked 'proofs' that presentism is false, despite the fact that I think it's false.

    I entertain the notion that our universe is a mathematical structure, but there are some serious problems with that proposition for which I've not seen a satisfactory resolution. Does it sink the ship? No, but a viable model is needed, and I'm not sure there is one. Sean Carroll got into this.

    Would you describe your position as 'emergentist' then?boundless
    Close enough. More of a not-unemergentist, distinct in that I assert that the physical is sufficient for the emergence of these things, as opposed to asserting that emergence from the physical is a necessary fact, a far more closed-minded stance.

    Still, I am hesitant to see it as an example of emergence of intentionality for two reasons.

    First, these machines, like all others, are still programmed by human beings who decide how they should work.
    boundless
    This is irrelevant to emergence, which just says that intentionality is present, consisting of components, none of which carry intentionality.
    OK, so you don't deny the emergence, but deny that it is intentionality at all, since it is not its own: quite similar to how my intentions at work are those of my employer instead of my own.

    To take a different example, if you consider a mechanical calculator, it might seem that it 'recognizes' the numbers '2' and '3'
    It recognizes 2 and 3. It does not recognize the characters. That would require an image-to-text translator (like the one in the video, learning or not). Yes, it adds. Yes, it has a mechanical output that displays results in human-readable form. That's my opinion of language being appropriately applied. It's mostly a language difference (choosing those words to describe what it's doing or not) and not a functional difference.

    Secondly, the outputs the machine gives are the results of statistical calculations. The machine is given a set of examples associating hand-written numbers with the digits they are meant to represent. It then manages to perform better on further trials by minimizing the error function.
    Cool. So similar to how humans do it. The post office has had image-to-text interpretation for years, but not sure how much those devices learn as opposed to just being programmed. Those devices need to parse cursive addresses, more complicated than digits. I have failed to parse some hand written numbers.
    My penmanship sucks, but I'm very careful when hand-addressing envelopes.


    The map is the first-person view. Is the map (first-person view) not part of the territory?Harry Hindu
    I don't know what the territory is as you find distinct from said map.


    I said that our view is the model, and the point was that some people (naive realists) tend to confuse the map with the territory in their use of terms like "physical" and "material".
    Fine, but I'm no naive realist. Perception is not direct, and I'm not even a realist at all. A physicalist need not be any of these things.

    You do understand that we measure change using time
    Change over time, yes. There's other kinds of change.

    and that doing so entails comparing the relative frequency of change to another type of change
    Fine, so one can compare rates of change, which is frame dependent if we want to get into that.

    Do you not agree that our minds are part of the world and change like anything else in it, and that the time it takes our eye-brain system to receive and process information, compared to the rate at which what you are observing is changing, can play a role in how your mind models what it is seeing?
    I suppose so, but I don't know how one might compare a 'rate of continuous perception' to a 'rate of continuous observed change'. Both just happen all the time. Sure, a fast car goes by in less time than a slow car, if that's what you're getting at.

    Everything is a process. Change is relative. The molecules in the glass are moving faster than when it was a solid
    Well that's wrong. Glass was never a solid. The molecules in the old glass move at the same rate as newer harder glass, which is more temperature dependent than anything. But sure, their average motion over a long time relative to the window frame is faster in the old glass since it might move 10+ centimeters over decades. What's any of this got to do with 'the territory' that the first person view is supposedly a map of?
    Perhaps you mean territory as the thing in itself (and not that which would be directly perceived). You've not come out and said that. I agree with that. A non-naive physicalist would say that things like intentionality supervene on actual physical things, and not on the picture that our direct perceptions paint for us. I never suggested otherwise.

    therefore the rate of change has increased and is why you see it as a moving object rather than a static one.
    I see the old glass as moving due to it looking like a picture of flowing liquid, even though motion is not perceptible. A spinning top is a moving object since its parts are at different locations at different times, regardless of how it is perceived.
  • boundless
    584
    I deny that requirement. It sort of sounds like an idealistic assertion, but I don't think idealism suggests emergent properties.noAxioms

    If physical processes weren't intelligible, how could we even do science, which seeks at least an intelligible description of processes that allows us to make predictions and so on?

    I was on board until the bit about not being a time (presumably in our universe) when intentionality doesn't exist. It doesn't appear to exist at very early times, and it doesn't look like it will last.noAxioms

    I was saying that if there was a time when intentionality didn't exist, it must have come into being 'in some way' at a certain moment.

    But it hasn't been fully explained. A sufficiently complete explanation might be found by humans eventually (probably not), but currently we lack that, and in the past, we lacked it a lot more. Hence science.noAxioms

    I sort of agree. And honestly, I believe that everything is ultimately not fully knowable as a 'complete understanding' of anything would require the complete understanding of the context in which something exists and so on. Everything is therefore mysterious and, at the same time, paradoxically intelligible.

    Maybe we already have (the example from wonderer1 is good), but every time we do, the goalposts get moved, and a more human-specific explanation is demanded. That will never end since I don't think a human is capable of fully understanding how a human works any more than a bug knows how a bug works.noAxioms

    I don't know. Merely giving an output after computing the most likely alternative doesn't seem to me the same thing as intentionality. But, again, perhaps I am wrong about this. It just doesn't seem to be supported by our phenomenological experience.

    Mathematics seems to come in layers, with higher layers dependent on more fundamental ones. Is there a fundamental layer? Perhaps laws of form. I don't know. What would ground that?noAxioms

    Yes, I think I agree here. Even natural numbers seem to be 'based' on more fundamental concepts like identity, difference, unity, multiplicity etc. But nevertheless the truths about them seem to be non-contingent.

    Good pointnoAxioms

    In my records, if you agree with that, you are not a 'physicalist'. Of course, I accept that you might disagree with me here.

    Just so. So physical worlds would not depend on science being done on them. Most of them fall under that category. Why doesn't ours? That answer at least isn't too hard.noAxioms

    If we grant to science some ability to give us knowledge of physical reality, then we must assume that the physical world is intelligible. Clearly, the physical world doesn't depend on us doing scientific investigations on it but, nevertheless, the latter would seem to me ultimately fruitless if the former wasn't intelligible (except perhaps from a weird, purely pragmatic point of view).

    Agree again. It's why I don't come in here asserting that my position is the correct one. I just balk at anybody else doing that, about positions with which I disagree, but also about positions with which I agree. I have for instance debunked 'proofs' that presentism is false, despite the fact that I think it's false.noAxioms

    OK. I have a sort of similar approach to online discussions. Sometimes, however, I believe that it is simply impossible not to state one's own disagreement (or agreement) with a view in seemingly excessively confident terms. Like sarcasm, sometimes the 'level of confidence' comes out badly in discussions and people seem more confident about a given thing than they actually are.

    Furthermore, I also believe that a careful analysis of a position one has little sympathy for can actually be useful for better understanding and reinforcing one's own position. I get that sometimes it is not an easy task, but the fruits of such a careful (and respectful) approach are very good.

    Close enough. More of a not-unemergentist, distinct in that I assert that the physical is sufficient for the emergence of these things, as opposed to asserting that emergence from the physical is a necessary fact, a far more closed-minded stance.noAxioms

    Not sure what you mean here. Are you saying that the physical is sufficient for emergence but there are possible ways in which intentionality, consciousness etc emerge without the physical?

    This is irrelevant to emergence, which just says that intentionality is present, consisting of components, none of which carry intentionality.
    OK, so you don't deny the emergence, but that it is intentionality at all since it is not its own, quite similar to how my intentions at work are that of my employer instead of my own intentions.
    noAxioms

    Good point. But note that if your intentions were completely determined by your own employer, it would be questionable to call them 'your' intentions. Also, in order to emerge, 'your' intentions would need the intentionality of your employer.

    Anyway, even if I granted that, somehow, the machines could have an autonomous intentionality, there remains the fact that if intentionality, in order to emerge, always needs some other intentionality, then intentionality is fundamental.

    So, yeah, I sort of agree that intentionality can come into being via emergence but it isn't clear how it could emerge from something that is completely devoid of it.

    It recognizes 2 and 3. It does not recognize the characters. That would require an image-to-text translator (like the one in the video, learning or not). Yes, it adds. Yes, it has a mechanical output that displays results in human-readable form. That's my opinion of language being appropriately applied. It's mostly a language difference (choosing those words to describe what it's doing or not) and not a functional difference.noAxioms

    Again, I see it more as a machine performing an operation rather than a machine 'recognizing' anything. An engine that burns gasoline, giving energy to a car and allowing it to move, doesn't 'recognize' anything, yet it operates. In the same way, I doubt that a machine can recognize numbers in a way analogous to how we do, and I still do not find any evidence that they do anything more than performing an operation, as an engine does. This to me applies both to the mechanical calculator and to the computer in the video.

    An interesting question, however, arises. How can I be sure that humans (and, I believe, at least some animals) can 'recognize' numbers as I perceive myself doing? This is indeed a big question. Can we be certain that we - humans and (some?) animals - do recognize numbers, while machines do not? I am afraid that such a certainty is beyond our reach.

    Still, I think it is reasonable that machines do not have such a faculty because they operate algorithmically (and those algorithms can be VERY complex and designed to approximate our own abilities).

    Cool. So similar to how humans do it. The post office has had image-to-text interpretation for years, but not sure how much those devices learn as opposed to just being programmed. Those devices need to parse cursive addresses, more complicated than digits. I have failed to parse some hand written numbers.
    My penmanship sucks, but I'm very careful when hand-addressing envelopes.
    noAxioms

    Do we just do that, though? It seems from our own phenomenological experience that we have some control and self-awareness over our own 'operations' that machines do not have.

    That would be an interesting objective threshold of intelligence: any entity capable of [partially] comprehending itself.noAxioms

    I think I agree with that, provided that one adds the word in the square brackets. The problem is: can we have an unmistakable criterion that allows us to objectively determine whether a living being, machine or whatever has such an ability?
  • Harry Hindu
    5.8k
    The map is the first-person view. Is the map (first-person view) not part of the territory?
    — Harry Hindu
    I don't know what the territory is as you find distinct from said map.
    noAxioms
    It was a question to you about the distinction between territory and map. Is the map part of the territory? If there isn't a distinction, then that is basically solipsism. Solipsism implies that the map and the territory are one and the same. One might even say there is no map - only the territory as the mind is all there is.

    Fine, but I'm no naive realist. Perception is not direct, and I'm not even a realist at all. A physicalist need not be any of these things.noAxioms
    What does it even mean to be a physicalist? What does "physical" even mean? When scientists describe objects they say things like, "objects are mostly empty space" and describe matter as the relationship between smaller particles all the way down (meaning we never get at actual physical stuff - just more fundamental relationships, or processes) until we arrive in the quantum realm where "physical" seems to have no meaning, or is at least dependent upon our observations (measuring).

    Change over time, yes. There's other kinds of change.noAxioms
    Like...? You might say that there are changes in space but space is related to time. So maybe I should ask if there is an example of change independent of space-time. Space-time appears to be the medium in which change occurs.

    I suppose so, but I don't know how one might compare a 'rate of continuous perception' to a 'rate of continuous observed change'. Both just happen all the time. Sure, a fast car goes by in less time than a slow car, if that's what you're getting at.noAxioms
    Ever seen super slow motion video of a human's reaction time to stimuli? It takes time for your mind to become aware of its surroundings. You are always perceiving the world as it was in the past, so your brain has to make some predictions. Solid objects are still changing - just at a much slower rate.
    The simplified, cartoonish version of events you experience is what you refer to as "physical", where objects appear as solid objects that "bump" against each other because that is how the slower processes are represented on the map. How would you represent slow processes vs faster processes on a map?

    A non-naive physicalist would say that things like intentionality supervene on actual physical things, and not on the picture that our direct perceptions paint for us. I never suggested otherwise.noAxioms
    I don't understand. Is the picture not physical as well for a physicalist?

    How do you explain an illusion, like a mirage, if not intentionality supervening on the picture instead of on some physical thing?

    I don't know what it means for intentionality to supervene on actual physical things. But I do know that if you did not experience empty space in front of you and instead experienced the cloud of gases surrounding you, then your intentions might be quite different. Yet you act on the feeling of there being nothing in front of you, because that is how your visual experience is.

    How do you reconcile the third-person view of another's brain (your first-person experience of another's brain: a third-person view can only be had via a first-person view) with their first-person experience of empty space and visual depth? This talk of views seems to be confusing things. What exactly is a view? A process? Information?

    I see the old glass as moving due to it looking like a picture of flowing liquid, even though motion is not perceptible. A spinning top is a moving object since its parts are at different locations at different times, regardless of how it is perceived.noAxioms
    Maybe I should try this route - Does a spinning top look more like a wave than a particle, and when it stops does it look more like a particle than a wave? Is a spinning top a process? Is a top at rest a process - just a slower one? Isn't the visual experience of the wave-like blur of a spinning top the relationship between the rate of change of position of each part of the top relative to your position in space and the rate at which your eye-brain system can process the change it is observing? If your mental processing were faster, then it would effectively slow down the apparent speed of the top to the point where it would appear as a stable, solid object standing perfectly balanced on its bottom peg.
  • noAxioms
    1.7k
    If physical processes weren't intelligible, how could we even do science?boundless
    Doing science is how something less unintelligible becomes more intelligible.

    I was saying that if there was a time when intentionality didn't exist, it must have come into being 'in some way' at a certain moment.
    OK, that's a lot different than how I read the first statement.

    Merely giving an output after computing the most likely alternative doesn't seem to me the same thing as intentionality.
    I don't think the video was about intentionality. There are other examples of that, such as the robot with the repeated escape attempts, despite not being programmed to escape.

    The video was more about learning and consciousness.

    In my records, if you agree with [mathematics not being just a natural property of this universe, and thus 'supernatural'], you are not a 'physicalist'.
    Depends on definitions. I was unaware that the view forbade deeper, non-physical foundations. It only asserts that there isn't something else, part of this universe, but not physical. That's how I take it anyway.
    If we grant to science some ability to give us knowledge of physical reality, then we must assume that the physical world is intelligible.
    Partially intelligible, which is far from 'intelligible', a word that on its own implies nothing remaining that isn't understood.

    Like sarcasm, sometimes the 'level of confidence' comes out badly in discussions and people seem more confident about a given thing than they actually are.boundless
    Not sure where you think my confidence level is. I'm confident that monism hasn't been falsified. That's about as far as I go. BiV hasn't been falsified either, and it remains an important consideration, but positing that you're a BiV is fruitless.

    More of a not-unemergentist, distinct in that I assert that the physical is sufficient for emergence of these things, as opposed to asserting that emergence from the physical is necessary fact, a far more closed-minded stance. — noAxioms

    Not sure what you mean here. Are you saying that the physical is sufficient for emergence but there are possible ways in which intentionality, consciousness etc emerge without the physical?
    I'm saying that alternatives to such physical emergence have not been falsified, so yes, I suppose those alternative views constitute 'possible ways in which they exist without emergence from the physical'.

    Good point. But note that if your intentions could be completely determined by your own employer, it would be questionable to call them 'your' intentions.
    Just like you're questioning whether a machine's intentions are its own because some of them were determined by its programmer.

    Also, to emerge 'your' intentions would need the intentionality of your employer.
    No, since I am composed of parts, none of which have the intentionality of my employer. So it's still emergent, even if the intentions are not my own.

    there remains the fact that if intentionality, in order to emerge, always needs some other intentionality, then intentionality is fundamental.
    That seems to be self-contradictory. If it's fundamental, it isn't emergent, by definition.


    Again, I see it more like a machine doing an operation rather than a machine 'recognizing' anything.boundless
    The calculator doesn't know what it's doing, I agree. It didn't have to learn. It's essentially a physical tool that nevertheless does mathematics despite not knowing that it's doing that, similar to a screwdriver screwing despite not knowing it's doing that. Being aware of its function is not one of its functions.

    I still do not find any evidence that they do something more than doing an operation as an engine does.
    Agree.

    This to me applies both to the mechanical calculator and the computer in the video.
    Don't agree. The thing in the video learns. An engine does too these days, something that particularly pisses me off since I regularly have to prove to my engine that I'm human, and I tend to fail that test for months at a time. The calculator? No, that has no learning capability.

    An interesting question, however, arises. How can I be sure that humans (and, I believe, also animals at least) can 'recognize' numbers as I perceive myself doing?
    Dabbling in solipsism now? You can't see the perception or understanding of others, so you can only infer when others are doing the same thing.

    Still, I think it is reasonable that machines do not have such a faculty because they operate algorithmically
    How do you know that you do not also operate this way? I mean, sure, you're not a von Neumann machine, but being one is not a requirement for operating algorithmically. If you don't know how it works, then you can't assert that it doesn't fall under that category.
    More importantly, what assumptions are you making that preclude anything operating algorithmically from having this understanding? How do you justify those assumptions? They seem incredibly biased to me.


    It was a question to you about the distinction between territory and map. Is the map part of the territory?Harry Hindu
    OK. It varies from case to case. Sometimes it is. The 'you are here' sign points to where the map is on the map, with the map being somewhere in the territory covered by the map.
    Your solipsism question implies that you were asking a different question. OK. Yes, the map is distinct from the territory, but you didn't ask that. Under solipsism, they're not even distinct.

    Your prior post did eventually suggest a distinction between a perceived thing (a 3D apple say) and the ding an sich, which is neither 3D nor particularly even a 'thing'.

    What does it even mean to be a physicalist?
    Different people use the term differently, I suppose. I did my best a few posts back, something like "the view that all phenomena are the result of what we consider natural law of this universe", with 'this universe' loosely being defined as 'all contained by the spacetime which we inhabit'. I gave some challenges to that definition, such as the need to include dark matter under the category of 'natural law' to explain certain phenomena. Consciousness could similarly be added if it can be shown that it cannot emerge from current natural law, but such a proposal makes predictions, and those predictions fail so far.

    When scientists describe objects they say things like, "objects are mostly empty space" and describe matter as the relationship between smaller particles all the way down (meaning we never get at actual physical stuff - just more fundamental relationships, or processes) until we arrive in the quantum realm where "physical" seems to have no meaning, or is at least dependent upon our observations (measuring).
    All correct, which is why I didn't define 'physical' in terms of material, especially since they've never found any material. Yes, rocks are essentially clusters of quantum do-dads doing their quantumy stuff. There are no actual volume-filling particles, so 'mostly empty space' should actually get rid of 'mostly'.

    Change over time, yes. There's other kinds of change. — noAxioms
    Like...?
    e.g. The air pressure changes with altitude.

    So maybe I should ask if there is an example of change independent of space-time.
    In simplest terms, the function y = 0.3x, the y value changes over x. That being a mathematical structure, it is independent of any notion of spacetime. Our human thinking about that example of course is not independent of it. We cannot separate ourselves from spacetime.
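    Spelled out, the example makes the point directly (this just restates the function above): the variation is with respect to x, and no time parameter appears anywhere.

    ```latex
    y(x) = 0.3\,x, \qquad \frac{\mathrm{d}y}{\mathrm{d}x} = 0.3 \quad \text{(no } t \text{ in sight)}
    ```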

    You are always perceiving the world as it was in the past, so your brain has to make some predictions.
    ...
    The simplified, cartoonish version of events you experience is what you refer to as "physical", where objects appear as solid objects that "bump" against each other because that is how the slower processes are represented on the map.
    Sure, one can model rigid balls bouncing off each other, or even simpler models than that if such serves a pragmatic purpose. I realize that's not what's going on. Even the flow of time is a mental construct, a map of sorts. Even you do it, referencing 'the past' like it was something instead of just a pragmatic mental convenience.

    How would you represent slow processes vs faster processes on a map?
    Depends on the nature of the map. If you're talking about perceptions, then it would be a perception of relative motion of two things over a shorter vs longer period of time, or possibly same time, but the fast one appears further away. If we're talking something like a spacetime diagram, then velocity corresponds to slopes of worldlines.

    I don't understand. Is the picture not physical as well for a physicalist?
    Sure it is, but the mental picture is not the intentionality, just the idea of it.

    How do you explain an illusion, like a mirage, if not intentionality supervening on the picture instead of on some physical thing?
    I don't understand this. A mirage is a physical thing. A camera can take a picture of one. No intentionality is required of the camera for it to do that. I never suggested that intentionality supervenes on any picture. Territories don't supervene on maps.


    I don't know what it means for intentionality to supervene on actual physical things. But I do know that if you did not experience empty space in front of you and instead experienced the cloud of gases surrounding you, then your intentions might be quite different. Yet you act on the feeling of there being nothing in front of you, because that is how your visual experience is.
    Yes, my experience and subsequent mental assessment of state (a physical map of sorts) influences what I choose to do. Is that so extraordinary?

    How do you reconcile the third person view of another's brain (your first-person experience of another's brain - a third person view can only be had via a first-person view.) with their first person experience of empty space and visual depth?
    Sorry, you lost me, especially the bits in parentheses.
    Other people (if they exist in the way that I do) probably have a first person view similar to my own. A third person description of their brain (or my own) is not required for this. I have no first person view of any brain, including my own. In old times, it wasn't obvious where the thinking went on. It took third person education to learn that.

    This talk of views seems to be confusing things. What exactly is a view? A process? Information?
    Probably a good question. In context of the title of this topic, I'm not actually sure about the former since I don't find baffling what others do. Third person is simply a description, language or whatever. A book is a good third person view of a given subject. First person is a subjective temporal point of view by some classical entity. Those biased would probably say that the entity has to be alive.

    Maybe I should try this route - Does a spinning top look more like a wave than a particle, and when it stops does it look more like a particle than a wave?
    It never looks like either. You're taking quantum terminology way out of context here. Quantum entities sometimes have wave-like properties and also particle-like properties, but those entities are never actually either of those things.

    Is a spinning top a process? Is a top at rest a process - just a slower one?
    Yes to all.

    Isn't the visual experience of a wave-like blur of a spinning top the relationship between the rate of change of position of each part of the top relative to your position in space and the rate at which your eye-brain system can process the change it is observing?
    Yea, pretty much. My eyes cannot follow it, even if they could follow linear motion at the same speed.

    If your mental processing were faster, then it would actually slow down the speed of the top to the point where it would appear as a stable, solid object standing perfectly balanced on its bottom peg.
    I'd accept that statement. Clouds look almost static like that, until you watch a time-lapse video of them. You can see the motion, but only barely. In fast-mo, I've seen clouds break like waves against a beach.
  • Harry Hindu
    5.8k
    OK. It varies from case to case. Sometimes it is. The 'you are here' sign points to where the map is on the map, with the map being somewhere in the territory covered by the map.
    Your solipsism question implies that you were asking a different question. OK. Yes, the map is distinct from the territory, but you didn't ask that. Under solipsism, they're not even distinct.
    noAxioms
    I can't think of a case where the map is not part of the territory, unless you are a solipsist, in which case they are one and the same, not one part of the other.

    Your prior post did eventually suggest a distinction between a perceived thing (a 3D apple say) and the ding an sich, which is neither 3D nor particularly even a 'thing'.noAxioms
    I may make a distinction between an idea and something that is not an idea (I'm not an idealist). But I do not make a distinction between their existence. Santa Claus exists - as an idea. The question isn't whether Santa Claus exists or not. It does, as we have "physical" representations (effects) of that idea (the cause) every holiday season. The question is, "what is the nature of its existence?". People are not confused about the existence of god. They are confused about the nature of god - is it just an idea, or does god exist as something more than just an idea?
  • boundless
    584
    Doing science is how something less unintelligible becomes more intelligible.noAxioms

    Ok.

    There are other examples of that, such as the robot with the repeated escape attempts, despite not being programmed to escape.noAxioms

    I'll try to find some of these things. Interesting.

    Partially intelligible, which is far from 'intelligible', a word that on its own implies nothing remaining that isn't understood.noAxioms

    Well, it depends on what we mean by 'intelligible'. A thing might be called 'intelligible' because it is fully understood or because it can be, in principle, understood completely*. That's why I tend to use the expressions 'partially intelligible' and 'intelligible' in a somewhat liberal manner.

    *This 'in principle' does not refer only to human minds. If there were minds with higher abilities than our own, it may be that they understand something that we do not and cannot. This doesn't mean that those things are 'unintelligible'.

    Not sure where you think my confidence level is. I'm confident that monism hasn't been falsified. That's about as far as I go. BiV hasn't been falsified either, and it remains an important consideration, but positing that you're a BiV is fruitless.noAxioms

    I believe that you believe that some alternatives are more reasonable than the others but you don't think that there is enough evidence to say that one particular theory is 'the right one beyond reasonable doubt'.

    I'm saying that alternatives to such physical emergence have not been falsified, so yes, I suppose those alternative views constitute 'possible ways in which they exist without emergence from the physical'.noAxioms

    Ok, thanks.

    No, since I am composed of parts, none of which have the intentionality of my employer. So it's still emergent, even if the intentions are not my own.noAxioms

    My point wasn't that the programmer's intentionality is part of the machine but, rather, that it is a necessary condition for the machine to come into being. If the machine had intentionality, such an intentionality would also depend on the intentionality of its builder, so we still couldn't say that the machine's intentionality emerged from purely 'inanimate' causes.

    Not a very strong argument, but it is still an interesting point IMO (not that I am conceding here that we can build machines which have intentionality).

    Don't agree. The thing in the video learns. An engine does too these days, something that particularly pisses me off since I regularly have to prove to my engine that I'm human, and I tend to fail that test for months at a time. The calculator? No, that has no learning capability.noAxioms

    Mmm. I still don't get why. It seems to me that there is only a difference of complexity. 'Learning' IMO would imply that the machine can change the algorithms according to which it operates (note that here I am not using the term 'learning' to refer to the mere adding of information but, rather, to something like learning an ability...).

    Dabbling in solipsism now? You can't see the perception or understanding of others, so you can only infer when others are doing the same thing.noAxioms

    Yes, I agree. But I am not sure that this inference is enough for certainty, except of the form of certainty 'for all practical purposes'.

    More importantly, what assumptions are you making that preclude anything operating algorithmically from having this understanding? How do you justify those assumptions? They seem incredibly biased to me.noAxioms

    They are inferences that I can make based on my own experience. I might be wrong, of course, but it doesn't seem to me that I can explain all features of my mental activities in purely algorithmic terms (e.g. how I make some choices). I might concede, however, that I am not absolutely sure that there isn't an unknown algorithmic explanation of all the operations that my mind can do.
    To change my view I would need more convincing arguments that my - or any human being's - mind is algorithmic.
  • noAxioms
    1.7k
    Well, it depends on what we mean by 'intelligible'.boundless
    You've been leveraging the word now for many posts. Maybe you should have put out your definition of that if it means something other than 'able to be understood', as opposed to say 'able to be partially understood'.

    A thing might be called 'intelligible' because it is fully understood or because it can be, in principle, understood completely*.
    First of all, by whom? Something understood by one might still baffle another, especially if the other has a vested interest in keeping the thing in the unintelligible list, even if only by declaring the explanation as one of correlation, not causation.

    There are things that even in principle will never be understood completely, such as the true nature of the universe since there can never be tests that falsify different interpretations. From this it does not follow that physicalism fails. So I must deny that physicalism has any requirement of intelligibility, unless you have a really weird definition of it.

    I believe that you believe that some alternatives are more reasonable than the others
    Yup. Thus I have opinions. Funny that I find BiV (without even false sensory input) less unreasonable than magic.

    but you don't think that there is enough evidence to say that one particular theory is 'the right one beyond reasonable doubt'.
    One person's reasonable doubt is another's certainty. Look at all the people that know for certain that their religion of choice (all different ones) is the correct one. Belief is a cheap commodity with humans, rightfully so since such a nature makes us more fit. A truly rational entity would not be similarly fit, and thus seems unlikely to have evolved by natural selection.


    My point wasn't that the programmer's intentionality is part of the machine but, rather, that it is a necessary condition for the machine to come into being.boundless
    If the machine was intentionally made, then yes, by definition. If it came into being by means other than a teleological one, then not necessarily so. I mean, arguably my first born came into being via intentionality, and the last not, despite having intentionality himself. Hence the condition is not necessary.
    There are more extreme examples of this, like the Civil War case of a woman getting pregnant without ever first meeting the father, with a bullet carrying the sperm rather than any kind of intent being involved.

    If the machine had intentionality, such an intentionality would also depend on the intentionality of its builder, so we still couldn't say that the machine's intentionality emerged from purely 'inanimate' causes.
    A similar argument seeks to prove that life cannot result from non-living natural (non-teleological) processes.


    'Learning' IMO would imply that the machine can change the algorithms according to which it operatesboundless
    That makes it sound like it rewrites its own code, which it probably doesn't. I've actually written self-modifying code, but it wasn't a case of AI or learning or anything, just efficiency or necessity.
    How does a human learn? We certainly adopt new algorithms for doing things we didn't know how to do before. We change our coding, which is essentially adding/strengthening connections. A machine is more likely to just build some kind of data set that can be referenced to do its tasks better than without it. We do that as well.
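    That picture of learning - fixed code, adjustable connections - can be sketched in a few lines. This is a minimal, hypothetical illustration (the OR-gate data, learning rate, and epoch count are made up for the example), not a claim about how any particular AI works:

    ```python
    # Sketch: the update rule below is fixed code that never rewrites itself,
    # yet the machine's behaviour changes as its weights - its "connections" -
    # are strengthened or weakened by exposure to examples.
    def train_perceptron(examples, epochs=20, lr=0.1):
        w = [0.0, 0.0]   # connection strengths, adjusted by experience
        b = 0.0          # bias term
        for _ in range(epochs):
            for (x1, x2), target in examples:
                out = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
                err = target - out
                # "learning": nudge the connections, no code modified
                w[0] += lr * err * x1
                w[1] += lr * err * x2
                b += lr * err
        return w, b

    # Learn the OR function from examples rather than from an explicit rule.
    examples = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]
    w, b = train_perceptron(examples)
    predict = lambda x1, x2: 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
    ```

    The function never modifies its own instructions; all the 'learning' lives in two weights and a bias, much like strengthened connections.
    
    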

    Learning to walk is an interesting example since it is mostly subconscious nerve connections being formed, not data being memorized. I wonder how an AI would approach the task. They probably don't so much at this point since I've seen walking robots and they suck at it. No efficiency or fluidity at all, part of which is the fault of a hardware designer who gave it inefficient limbs.


    I might be wrong, of course, but it doesn't seem to me that I can explain all features of my mental activities in purely algorithmic terms (e.g. how I make some choices).
    They have machines that detect melanoma in skin images. There's no hand-coded algorithm for that; learning is the only way, and the machines do it better than any doctor. Earlier, it was kind of a joke that machines couldn't tell cats from dogs. That's because the task was attempted with hand-coded algorithms. Once the machine was able to just learn the difference the way humans do, the problem went away, and you don't hear much about it anymore.

    I might concede, however, that I am not absolutely sure that there isn't an unknown algorithmic explanation of all the operations that my mind can do.
    Technically, anything a physical device can do can be simulated in software, which means a fairly trivial (not AI at all) algorithm can implement you. This is assuming a monistic view of course. If there's outside interference, then the simulation would fail.
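    As an aside, the simulation claim rests on nothing fancier than stepping a physical state forward by a fixed rule. A toy sketch (the drag coefficient and step size are arbitrary choices for illustration):

    ```python
    # Toy illustration: a fixed-step (Euler) update simulating a physical
    # process - a ball falling under gravity with linear drag. Nothing
    # "intelligent" is involved: just a state and an update rule.
    def simulate_fall(height, dt=0.01, g=9.81, drag=0.1):
        """Return the (approximate) time in seconds for the ball to reach y=0."""
        y, v, t = height, 0.0, 0.0
        while y > 0:
            a = -g - drag * v   # net acceleration from the current state
            v += a * dt         # trivial, non-AI state update
            y += v * dt
            t += dt
        return t

    t = simulate_fall(10.0)     # roughly 1.5 s for a 10 m drop
    ```

    Nothing here learns or decides; it is trivial in exactly the sense meant above, and in principle the same move scales to any physical dynamics you can write down.
    
    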



    I can't think of a case where the map is not part of the territory, unless you are a solipsist, in which case they are one and the same, not one part of the other.Harry Hindu
    Again, I'm missing your meaning because it's trivial. I have a map of Paris, and that map is not part of Paris since the map is not there. That's easy, so you probably mean something else by such statements. Apologies for not getting what that is, and for not getting why this point is helping me figure out why Chalmers finds the first person view so physically contradictory.

    Santa Claus exists - as an idea.
    So I would say that the idea of Santa exists, but Santa does not. When I refer to an idea, I make it explicit. If I don't, then I'm not referring to the idea, but (in the case of the apple, say) the noumenon. Now in the apple case, it was admittedly a hypothetical real apple, not a specific apple that would be a common referent between us. Paris on the other hand is a common referent.

    People are not confused about the existence of god.
    If that were so, there'd not be differing opinions concerning that existence, and even concerning the kind of existence meant.

    Yes, there is also disagreement about the nature of god. I mean, you're already asserting the nature by grammatically treating the word as a proper noun.
  • Harry Hindu
    5.8k
    Again, I'm missing your meaning because it's trivial. I have a map of Paris, and that map is not part of Paris since the map is not there. That's easy, so you probably mean something else by such statements. Apologies for not getting what that is, and for not getting why this point is helping me figure out why Chalmers finds the first person view so physically contradictory.noAxioms
    How is this example of your map representative of your mind as a map? Your map is always about where you are now (we are talking about your current experience of where you are - wherever you are.) As such, your map can never be somewhere other than in the territory you are in. If it makes it any easier, consider the entire universe as the territory and your map is always of the area you presently occupy in that territory.

    My point is that if the map is part of the territory - meaning it is causally connected with the territory - then map and territory must be part of the same "stuff" to be able to interact. It doesn't matter what flavor of dualism you prefer - substance, property, etc. You still have to explain how physical things like brains and their neurons create a non-physical experience of empty space and visual depth.

    You can only determine what I see by looking at my neural activity and comparing it to others' neural activity. I can only determine what I see by having a visual experience that is made up of colors, shapes, etc. and comparing that to prior visual experiences - not neural activity.

    Our mental experience is the one thing we have direct access to, and are positive that exists, and other people's minds we have indirect access to, so it would seem to me that the things we experience indirectly are less like how they actually are than the things we experience directly. So when people talk about the "physical" nature of the world, they are confusing how it appears indirectly with how it is directly (since our map is part of the territory we experience part of the territory directly). The map is more like how reality is, and the symbols on the map are more like what reality is (information) than like what they represent.

    So I would say that the idea of Santa exists, but Santa does not. When I refer to an idea, I make it explicit. If I don't, then I'm not referring to the idea, but (in the case of the apple, say) the noumenon. Now in the apple case, it was admittedly a hypothetical real apple, not a specific apple that would be a common referent between us. Paris on the other hand is a common referent.noAxioms
    Your idea is a common referent between us, else how could you talk about it to anyone? One might say that the scribbles you just typed are a referent connecting your idea and some reader. If ideas have just as much causal power as things that are not just ideas, then maybe the problem you're trying to solve stems from thinking of ideas and things that are not just ideas as distinct.

    If that were so, there'd not be differing opinions concerning that existence, and even concerning the kind of existence meant.

    Yes, there is also disagreement about the nature of god. I mean, you're already asserting the nature by grammatically treating the word as a proper noun.
    noAxioms
    The differing opinions concerning whether god exists or not are dependent upon what the nature of god is. Is god an extradimensional alien, or is god simply a synonym for the universe?
  • noAxioms
    1.7k
    Your [mental] map is always about where you are now (we are talking about your current experience of where you are - wherever you are.)Harry Hindu
    One's current experience can be of somewhere other than where one is, but OK, most of the time, for humans at least, this is not so.

    If it makes it any easier, consider the entire universe as the territory and your map is always of the area you presently occupy in that territory.
    My mental map (the first person one) rarely extends beyond my pragmatic needs of the moment. I hold other mental maps, different scales, different points of view, but you're not talking about those.
    A substance dualist might deny that the map has a location at all, a property only of 'extended' substances. Any form of dualism requires a causal connection to the territory if the territory exists. If it doesn't exist in the same way that the map exists, then we're probably talking about idealism or virtualism like BiV.

    My point is that if the map is part of the territory - meaning it is causally connected with the territory - then map and territory must be part of the same "stuff" to be able to interact.
    Does that follow? I cannot counter it. If the causal connection is not there, the map would be just imagination, not corresponding to any territory at all. I'll accept it then.

    It doesn't matter what flavor of dualism you prefer - substance, property, etc. You still have to explain how physical things like brains and their neurons create a non-physical experience of empty space and visual depth.
    I think the point of dualism is to posit that the brain doesn't do these things. There are correlations, but that's it. Not sure what the brain even does, and why we need a bigger one if the mental stuff is doing all the work. Not sure why the causality needs to be through the brain at all. I mean, all these reports of out-of-body experiences seem to suggest that the mental realm doesn't need physical sensory apparatus at all. Such reports also heavily imply a sort of naive direct realism.

    Our mental experience is the one thing we have direct access to, and are positive that existsHarry Hindu
    It 'existing' depends significantly on one's definition of 'exists'. Just saying.
    What we have direct access to is our mental interpretation of our sensory stream, which is quite different from direct access to our minds. If we had the latter, there'd be far less controversy about how minds work. So mind, as we imagine it, might bear little correspondence to 'how it actually is'.

    So when people talk about the "physical" nature of the world, they are confusing how it appears indirectly with how it is directly
    Speak for yourself. For the most part I don't confuse this when talking about the physical nature of the world. Even saying 'the world' is a naive assumption based on direct experience.
    There are limits to what I know about this actual nature of things, and so inevitably assumptions from the map will fail to be recognized as such, and the model will be incomplete.

    since our map is part of the territory we experience part of the territory directly
    OK, but I experience an imagined map, and imagined things are processes of the territory of an implementation (physical or not) of the mechanism responsible for such processes.

    Your idea is a common referent between us, else how could you talk about it to anyone?
    That it is, and I didn't suggest otherwise.

    One might say that the scribbles you just typed are a referent connecting your idea and some reader. If ideas have just as much causal power as things that are not just ideas, then maybe the problem you're trying to solve stems from thinking of ideas and things that are not just ideas as distinct.
    Idealism is always an option, yes, but them not being distinct seems to lead to informational contradictions.


    And I must ask again, where is this all leading in terms of the thread topic?
  • boundless
    584
    You've been leveraging the word now for many posts. Maybe you should have put out your definition of that if it means something other than 'able to be understood', as opposed to say 'able to be partially understood'.noAxioms

    Let's take the weaker definition. Honestly, I don't think that anything changes in what I said.

    So I must deny that physicalism has any requirement of intelligibility, unless you have a really weird definition of it.noAxioms

    Partial intelligibility is still intelligibility. For instance, the reason I don't believe that the Earth is only 100 years old is that a different age of the Earth better fits all the evidence we have. This doesn't necessarily mean that absolutely everything about the Earth is intelligible, but if I did not have some faith in the ability of reason to identify the most likely explanation of the evidence, I could not even undertake a scientific investigation.

    So, yeah, I would say that intelligibility is certainly required to do science. And I doubt that there are physicalists who would seriously entertain the idea that science gives us no real understanding of physical reality.

    One person's reasonable doubt is another's certainty.noAxioms

    Of course people can be certain without a sufficient basis for being certain. A serious philosophical investigation should, however, give a higher awareness about the quality of the basis for one's beliefs.

    I hold beliefs that I admit are not 'proven beyond reasonable doubts'. There is nothing particularly wrong about having those beliefs if one is also sincere about the status of their foundation.

    There are more extreme examples of this, like the civil war case of a woman getting pregnant without ever first meeting the father, with a bullet carrying the sperm rather than any kind of intent being involved.noAxioms

    Good point. But in the case you mention one can object that the baby is still conceived by humans, who are intentional beings.

    An even more interesting point IMO would be abiogenesis. It is now accepted that life - and hence intentionality - 'came into being' from a lifeless state. So this would certainly suggest that intentionality can 'emerge from' something non-intentional.
    However, from what we currently know about the properties of what is 'lifeless', intentionality and other features do not seem to be explainable in terms of those properties. So what? Perhaps what we currently know about the 'lifeless' is incomplete.

    A similar argument seeks to prove that life cannot result from non-living natural (non-teleological) processes.noAxioms

    Yes, I know. However, unless a convincing objection can be made to the argument, the argument is still defensible.

    We change our coding, which is essentially adding/strengthening connections. A machine is more likely to just build some kind of data set that can be referenced to do its tasks better than without it. We do that as well.noAxioms

    Note that we can also do that with awareness.

    As a curiosity, what do you think about the Chinese room argument? I still haven't found convincing evidence that machines can do something that can't be explained in such terms, i.e. machines can seem to understand what they are doing without really understanding it.

    They have machines that detect melanoma in skin images. There's no algorithm to do that. Learning is the only way, and the machines do it better than any doctor. Earlier, it was kind of a joke that machines couldn't tell cats from dogs. That's because they attempted the task with algorithms. Once the machine was able to just learn the difference the way humans do, the problem went away, and you don't hear much about it anymore.noAxioms

    Interesting. But how do they 'learn'? Is that process of learning describable by algorithms? Are they programmed to learn the way they do?

    Technically, anything a physical device can do can be simulated in software, which means a fairly trivial (not AI at all) algorithm can implement you. This is assuming a monistic view of course. If there's outside interference, then the simulation would fail.noAxioms

    This IMO assumes more than just 'physicalism'. You also assume that all natural processes are algorithmic.
  • Harry Hindu
    5.8k
    My mental map (the first person one) rarely extends beyond my pragmatic needs of the moment. I hold other mental maps, different scales, different points of view, but you're not talking about those.noAxioms
    Aren't I? What type of map is the third person one? How does one go from a first person view to a third person view? Do we ever get out of our first-person view?

    And I must ask again, where is this all leading in terms of the thread topic?noAxioms
    How is talk about first and third person views related to talk about direct and indirect realism? If one is a false dichotomy, would that make the other one as well?

    What if we defined the third person view as a simulated first person view?
  • noAxioms
    1.7k
    So, yeah I would say that intelligibility is certainly required to do science.boundless
    Fine, but it was especially emergence that I was talking about, not science.
    For instance, the complex structure of a snowflake is an emergent property of hydrogen and oxygen atoms. There are multiple definitions of strong vs weak emergence, but one along your lines suggests that intelligibility plays a distinguishing role. One could not have predicted snowflakes despite knowing the properties of atoms and water molecules, but having never seen snow. By one intelligibility definition, that's strong emergence. By another such definition (it's strong only if we continue to not understand it), it becomes weak emergence. If one uses a definition of strong emergence meaning that the snowflake property cannot even in principle be explained by physical interactions alone, then something else (said magic) is required, and only then is it strongly emergent.

    I hold beliefs that I admit are not 'proven beyond reasonable doubts'
    Worse, I hold beliefs that I know are wrong. It's contradictory, I know, but it's also true.

    Good point. But in the [conception/marriage by bullet] case you mention one can object the baby is still conceived by humans who are intentional beings.
    Being an intentional entity by no means implies that the event was intended.

    An even more interesting point IMO would be abiogenesis. It is now accepted that life - and hence intentionality - 'came into being' from a lifeless state.
    That's at best emergence over time, a totally different definition of emergence. Planet X didn't exist, but it emerged over time out of a cloud of dust. But the (strong/weak) emergence we're talking about is a planet made of atoms, none of which are planets.

    However, from what we currently know about the properties of what is 'lifeless', intentionality and other features do not seem to be explainable in terms of those properties.
    I suggest that they've simply not been explained yet to your satisfaction, but there's no reason that they cannot in principle ever be explained in such terms.

    We change our coding, which is essentially adding/strengthening connections. A machine is more likely to just build some kind of data set that can be referenced to do its tasks better than without it. We do that as well. — noAxioms

    Note that we can also do that with awareness.
    What do you mean by this? Of what are we aware that a machine cannot be? It's not like I'm aware of my data structures or aware of connections forming or fading away. I am simply presented with the results of such subconscious activity.

    As a curiosity, what do you think about the Chinese room argument?
    A Chinese room is a computer with a person acting as a CPU. A CPU has no understanding of what it's doing. It just does its job, a total automaton.
    The experiment was proposed well before LLMs, but it operates much like an LLM, with the person acting as the LLM's CPU (presuming there's only one). I could not get a clear enough description of the thought experiment to figure out how it works. There are apparently at least four lists of symbols and rules for correlating them, but I could not figure out what each list was for. The third was apparently queries put to the room by outside Chinese speakers.

    I still haven't found convincing evidence that machines can do something that can't be explained in such terms, i.e. machines can seem to understand what they are doing without really understanding it.
    It's not like any of my neurons understands what it's doing. Understanding is an emergent property of the system operating, not a property of any of its parts. The guy in the Chinese room does not understand Chinese, nor does any of his lists. I suppose an argument can be made that the instructions (in English) have such understanding, but that's like saying a book understands its own contents, so I think that argument is easily shot down.
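    The rule-following character of the room can be sketched in a few lines of code. Everything here is hypothetical and invented for illustration; the point is only that the program maps symbols to symbols without any component representing meaning:

    ```python
    # A toy "Chinese room": a responder that follows purely symbolic rules,
    # mapping input strings to output strings with no semantics attached.
    # The rulebook entries are hypothetical, invented for illustration.

    RULEBOOK = {
        "你好": "你好！",          # a greeting maps to a greeting
        "你会说中文吗？": "会。",   # "Can you speak Chinese?" maps to "Yes."
    }

    def room_reply(query: str) -> str:
        """Return the rulebook's answer, or a fixed fallback symbol.

        Nothing here 'understands' Chinese: it is pure symbol lookup,
        which is exactly the point of Searle's thought experiment.
        """
        return RULEBOOK.get(query, "请再说一遍。")  # "Please say that again."

    print(room_reply("你好"))  # the room answers fluently without understanding
    ```

    Whether a vastly larger rulebook (or a learned one, as in an LLM) would still count as mere lookup is of course the contested question.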

    Interesting. But how do they 'learn'?
    Same way you do: practice. Look at millions of images with known positive/negative status. After doing that a while, it learns what to look for despite the lack of an explanation of what exactly matters.

    Is that process of learning describable by algorithms? Are they programmed to learn the way they do?
    I think so, similar to us. Either that or they program it to learn how to learn, or some such indirection like that.
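    A minimal sketch of that idea, assuming nothing about the actual melanoma systems: the learning *procedure* below is a small, explicit algorithm (perceptron updates on synthetic labeled data), yet the resulting classification rule is never hand-coded by anyone; it emerges as learned weights.

    ```python
    import random

    random.seed(0)

    # Synthetic labeled examples (invented for illustration):
    # x = (f1, f2) drawn uniformly, labeled +1 iff f1 + f2 > 1.
    data = []
    for _ in range(200):
        x = (random.random(), random.random())
        data.append((x, 1 if x[0] + x[1] > 1.0 else -1))

    # Perceptron learning: the update rule is algorithmic, but the
    # classifier itself is just weights shaped by labeled experience.
    w, b = [0.0, 0.0], 0.0
    for _ in range(20):                       # a few passes over the data
        for x, label in data:
            pred = 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else -1
            if pred != label:                 # mistake-driven update
                w[0] += label * x[0]
                w[1] += label * x[1]
                b += label

    correct = sum(1 for x, label in data
                  if (1 if w[0] * x[0] + w[1] * x[1] + b > 0 else -1) == label)
    print(f"training accuracy: {correct / len(data):.2f}")
    ```

    So "is the learning describable by algorithms?" can be answered yes for the training loop, while still leaving open whether the learned behavior is usefully described that way.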

    This IMO assumes more than just 'physicalism'. You also assume that all natural processes are algorithmic.
    OK. Can you name a physical process that isn't? Not one that you don't know how it works, but one that you do know, and it's not algorithmic.


    How does one go from a first person view to a third person view?Harry Hindu
    One does not go from one to the other. One holds a first person view while interacting with a third person view.

    Do we ever get out of our first-person view?
    Anesthesia?

    How is talk about first and third person views related to talk about direct and indirect realism?
    Haven't really figured that out, despite your seeming to drive at it. First/Third person can both be held at once. They're not the same thing, so I don't see it as a false dichotomy.
    Direct/indirect realism seem to be opposed to each other (so a true dichotomy?), and both opposed of course to not-realism (not to be confused with anti-realism, which seems to posit mind being fundamental).

    If one is a false dichotomy, would that make the other one as well?
    I see no such connection between them that any such assignment of one would apply to the other.
  • Harry Hindu
    5.8k
    One does not go from one to the other. One holds a first person view while interacting with a third person view.noAxioms
    Can you provide an actual example of this?
    Haven't really figured that out, despite your seeming to drive at it. First/Third person can both be held at once. They're not the same thing, so I don't see it as a false dichotomy.noAxioms
    An example of first/third person held at once would be useful as well.

    Do we ever get out of our first-person view?
    Anesthesia?
    noAxioms
    Sure, but that would also get us out of the third person view, so I haven't seen you make a meaningful distinction between them (doesn't mean you haven't - just that I haven't seen it).

    Direct/indirect realism seem to be opposed to each other (so a true dichotomy?), and both opposed of course to not-realism (not to be confused with anti-realism, which seems to posit mind being fundamental).noAxioms
    It appears to be a false dichotomy because we appear to have direct access to our own minds and indirect access to the rest of the world, so both are the case and it merely depends on what it is we are talking about. I wonder if the same is true of the first/third person dichotomy.

    In discussing first and third person views and direct and indirect realism, aren't we referring to our view on views? Are we a camera pointing back at itself creating an infinite feedback loop when discussing these things?

    What role does the observer effect in QM play in this conversation?